2.7 Pre-release1 available for testing

It doesn't *recommend* running cmake in-source; it is just that the
instructions are written that way. When generating VS solution files,
building in-source is not a problem. It is a problem when generating
makefiles, though, as the generated makefiles will overwrite those
provided with LLVM's sources.

Ah, yes, you're right. Thanks!

cmake will generate makefiles if you wish. Just use this command:

cmake -G "NMake Makefiles"

but then you should use a different directory for the build.

*nods* Or just omit the argument and take the default settings; that
also works, provided you're in a different directory.
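For the makefile generators, the out-of-source workflow looks roughly
like this (a sketch; the directory names are illustrative):

```shell
# Configure in a separate build directory so the generated makefiles
# never clobber the makefiles shipped in LLVM's source tree.
mkdir build
cd build
cmake -G "NMake Makefiles" ..\llvm
```

Omitting -G picks the default generator for your platform, as noted
above.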

Supposing that you have the LLVM source code in c:/llvm, do you have

c:/llvm/tools/clang/CMakeLists.txt

?

If that file is not present, most likely your clang setup is wrong. The
LLVM cmake build tests for the presence of the above file and, if found,
automatically builds clang. If the test fails, clang is ignored.
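That detection step can be mimicked with a quick presence check (a
minimal sketch run in a scratch directory; llvm-src stands in for
c:/llvm):

```shell
# Recreate the layout the LLVM cmake build looks for, then apply the
# same presence test it uses to decide whether to build clang.
mkdir -p llvm-src/tools/clang
touch llvm-src/tools/clang/CMakeLists.txt
if [ -f llvm-src/tools/clang/CMakeLists.txt ]; then
  echo "clang found: it will be built automatically"
else
  echo "clang not found: it will be ignored"
fi
```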

Ah, that was the problem: I had a directory called clang-2.7. I renamed
it to clang and the whole thing works fine, generating a clang.exe that
runs on a small C program and produces x86 assembler output that looks
correct to eyeball inspection.

One small surprise/oddity:

C:\d\l\bin>dir cl*
Volume in drive C is OS
Volume Serial Number is 0422-C2D0

Directory of C:\d\l\bin

20/03/2010 21:11 30,767,616 clang++.exe
20/03/2010 21:11 30,767,616 clang.exe
20/03/2010 21:09 663 clang.exe.embed.manifest
20/03/2010 21:09 728 clang.exe.embed.manifest.res
20/03/2010 21:09 621 clang.exe.intermediate.manifest
20/03/2010 21:07 101 clang.exe.resource.txt
20/03/2010 21:11 81,173,048 clang.ilk
20/03/2010 21:11 137,416,704 clang.pdb
               8 File(s) 280,127,097 bytes
               0 Dir(s) 6,769,586,176 bytes free

C:\d\l\bin>fc/b clang.exe clang++.exe
Comparing files clang.exe and CLANG++.EXE
FC: no differences encountered

Is this intentional?

Russell Wallace <russell.wallace@gmail.com> writes:

[snip]

C:\d\l\bin>fc/b clang.exe clang++.exe
Comparing files clang.exe and CLANG++.EXE
FC: no differences encountered

Is this intentional?

Yes.

The 2.7 binaries are available for testing:
http://llvm.org/pre-releases/2.7/pre-release1/

You will find the source tarballs there as well.

We rely on the community to help make our releases great, so please help
test 2.7 if you can. Please follow these instructions to test 2.7:

/To test llvm-gcc:/

1) Compile llvm from source and untar the llvm-test in the projects
directory (name it llvm-test or test-suite). Choose to use a
pre-compiled llvm-gcc or re-compile it yourself.

2) Run make check, report any failures (FAIL or unexpected pass). Note
that you need to reconfigure llvm with llvm-gcc in your path or with
--with-llvmgccdir.

3) Run "make TEST=nightly report". Compare these results to a 2.6 llvm-test nightly report or send the results to the list. For supported targets, we'll try to examine the results, but it's best if you can do the comparison yourself.

Hi Tanya,

Attached are the nightly test results when run with llvm-gcc
(report.nightly.txt), and when run with clang (clang-report.nightly.txt).

Tests were run on x86-64, Debian unstable, Linux 2.6.33, gcc 4.4.3,
64-bit. I built with srcdir == objdir; I built llvm and clang myself
and used the binaries for llvm-gcc.

1. llvm-gcc 2.7 vs 2.6
compared to my results from Aug 31 2009, ignoring CBE failures:

new JIT failures:
MultiSource/Applications/spiff/spiff
SingleSource/Regression/C/2004-03-15-IndirectGoto

2. llvm-gcc 2.7 vs clang 2.7
When comparing the 2.7 llvm-gcc and clang results I see these
differences (is llvm-gcc considered baseline for clang?):
ALL FAIL (pass in llvm-gcc):
MultiSource/Benchmarks/PAQ8p/paq8p
MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4
MultiSource/Benchmarks/Prolangs-C/archie-client/archie
MultiSource/Benchmarks/Prolangs-C/cdecl/cdecl
SingleSource/Benchmarks/Misc-C++/bigfib
SingleSource/Regression/C++/EH/ConditionalExpr
SingleSource/Regression/C++/EH/ctor_dtor_count-2
SingleSource/Regression/C++/EH/function_try_block
SingleSource/Regression/C++/EH/simple_throw
SingleSource/UnitTests/2006-12-04-DynAllocAndRestore
SingleSource/UnitTests/Vector/SSE/sse.expandfft
SingleSource/UnitTests/Vector/SSE/sse.stepfft

JIT failures in clang, pass in llvm-gcc:
MultiSource/Applications/sqlite3/sqlite3
SingleSource/Regression/C++/ofstream_ctor

3. Some performance regressions GCC/LLC (2.6 -> 2.7), but keep in mind
that I wasn't using GCC 4.4.3 as comparison for llvm 2.6!

MultiSource/Applications/hexxagon/hexxagon 1.22 -> 1.14
MultiSource/Applications/lua/lua 0.91 -> 0.84
MultiSource/Applications/obsequi/Obsequi 0.93 -> 0.86
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk 1.01 -> 0.91
MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow 0.94 -> 0.75
MultiSource/Benchmarks/FreeBench/neural/neural 1.0 -> 0.9
MultiSource/Benchmarks/MiBench/telecomm-gsm/telecomm-gsm 1.06 -> 0.9
MultiSource/Benchmarks/Olden/treeadd/treeadd 11.44 -> 9.89
MultiSource/Benchmarks/Olden/tsp/tsp 1.14 -> 1.02
MultiSource/Benchmarks/Ptrdist/anagram/anagram 1.33 -> 1.23
SingleSource/Benchmarks/Dhrystone/dry 7.32 -> 5.16
SingleSource/Benchmarks/Dhrystone/fldry 8.02 -> 6.65
....

I'll have to write a script to compare the results; it's boring and
inaccurate to do by hand.

Will go through the bugzilla tomorrow and see if I need to open new bugs
for this stuff.
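The comparison script mentioned above can be sketched in a few lines of
shell (assuming, hypothetically, a "STATUS test-path" line format; the
real nightly report layout may differ):

```shell
# Two toy report fragments standing in for the 2.6 and 2.7 results.
cat > old_report.txt <<'EOF'
PASS MultiSource/Applications/spiff/spiff
FAIL MultiSource/Benchmarks/PAQ8p/paq8p
PASS SingleSource/Benchmarks/Dhrystone/dry
EOF
cat > new_report.txt <<'EOF'
FAIL MultiSource/Applications/spiff/spiff
FAIL MultiSource/Benchmarks/PAQ8p/paq8p
PASS SingleSource/Benchmarks/Dhrystone/dry
EOF

# New failures: tests failing in the new report but not in the old one.
grep '^FAIL' old_report.txt | cut -d' ' -f2 | sort > old_fails.txt
grep '^FAIL' new_report.txt | cut -d' ' -f2 | sort > new_fails.txt
comm -13 old_fails.txt new_fails.txt
```

comm -13 suppresses lines unique to the old list and lines common to
both, leaving only the newly failing tests.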

/To test clang:/

1) Compile llvm and clang from source.

2) Run make check for llvm.

3) Run make -C tools/clang-2.6 test VERBOSE=1 (report any failures or
unexpected passes)

Surely you meant tools/clang-2.7

FYI I pulled the following revisions for ClamAV's llvm on top of 2.7:
r98349
r98410
r98447
r98508
r99143
r99146
r99147
r99160
r99400

I don't know if any of these qualify as regression fixes for 2.7; I'll
leave it up to you to decide whether you want to put them into 2.7 or not.

Best regards,
--Edwin

report.nightly.txt (82.1 KB)

clang-report.nightly.txt (82.1 KB)

[snip]

Hi Tanya,

Attached are the nightly test results when run with llvm-gcc
(report.nightly.txt), and when run with clang (clang-report.nightly.txt).

Thanks for testing the release!

[snip]

1. llvm-gcc 2.7 vs 2.6
compared to my results from Aug 31 2009, ignoring CBE failures:

new JIT failures:
MultiSource/Applications/spiff/spiff
SingleSource/Regression/C/2004-03-15-IndirectGoto

Yes, I'm seeing the second regression on darwin too. Please file a bug for the other one if you haven't already.

2. llvm-gcc 2.7 vs clang 2.7
When comparing the 2.7 llvm-gcc and clang results I see these
differences (is llvm-gcc considered baseline for clang?):
[snip]

This isn't part of our release criteria, so these are not release blockers.

3. Some performance regressions GCC/LLC (2.6 -> 2.7), but keep in mind
that I wasn't using GCC 4.4.3 as comparison for llvm 2.6!

[snip]

Unfortunately, we just don't have enough manpower to make performance a release criterion at this time. We also need better infrastructure in place to track this stuff (Daniel is working on it).

[snip]

[snip]

3) Run make -C tools/clang-2.6 test VERBOSE=1 (report any failures or
unexpected passes)

Surely you meant tools/clang-2.7

Yes.

FYI I pulled the following revisions for ClamAV's llvm on top of 2.7:
[snip]

I don't know if any of these qualify as regression fixes for 2.7; I'll
leave it up to you to decide whether you want to put them into 2.7 or not.

I'll have to discuss with Chris about these. It's technically not a release blocker.

Thanks,
-Tanya

[snip]

Yes, I'm seeing the second regression on darwin too. Please file a bug for the other one if you haven't already.

I haven't, will do tomorrow.

[snip]

This isn't part of our release criteria, so these are not release blockers.

Ok, something to keep in mind for LLVM 2.8 then.

[snip]

Unfortunately, we just don't have enough manpower to make performance a release criterion at this time. We also need better infrastructure in place to track this stuff (Daniel is working on it).

Yes, I understand that.

[snip]

Will go through the bugzilla tomorrow

I still didn't have time to do this unfortunately.

and see if I need to open new bugs
for this stuff.

[snip]

I'll have to discuss with Chris about these. It's technically not a release blocker.

Meanwhile I pulled these too:
99762 (Evan approved)
99883 (leak fix, so I don't think it qualifies under the release criteria)

Best regards,
--Edwin

Sorry for taking so long. I opened a PR for spiff, but it's not a
regression; see PR6785.
To sum up: all tests (including native) run out of memory. llvm 2.6 had
this problem too, but it didn't miscompare (it was all out-of-memory).
Now, in 2.7, the JIT crashes when running out of memory.

While that is a bug, I'm not sure if it's part of the release criteria.
I tried to make some changes in MallocSlabAllocator to not crash, but
then FoldingSet crashed on OOM.

P.S.: Looking at the current 2.7 release blockers, I only see PR6696 as
relevant; the others are warnings, experimental code, and a Fortran failure.
Is that the case, or are the others release blockers too?

Best regards,
--Edwin

[snip]

Sorry for taking so long. I opened a PR for spiff, but it's not a
regression; see PR6785.
To sum up: all tests (including native) run out of memory. llvm 2.6 had
this problem too, but it didn't miscompare (it was all out-of-memory).
Now, in 2.7, the JIT crashes when running out of memory.

If that's the case, then it's not a release blocker.

P.S.: Looking at the current 2.7 release blockers, I only see PR6696 as
relevant; the others are warnings, experimental code, and a Fortran failure.
Is that the case, or are the others release blockers too?

- Warnings are release blockers.
- The Fortran build failure may end up not being a release blocker. This is under discussion, since we had originally added compiling llvm-gcc with Fortran support as part of the criteria, but it's not being tested at all on darwin (which is a huge problem, and I don't think we should only be testing it right before a release).

All the bugs have been updated with additional information. I will remove any that are determined not to be release blockers (e.g. PR6778), but for now they all are.

-Tanya