polyhedron 2005 results for llvm svn

I am finding with the patch that all of the Polyhedron 2005
benchmarks pass on i686-apple-darwin9. Could someone clarify the
regression rules for releases? Not building a secondary language
on a primary target is usually considered a P1 regression for
FSF gcc. Not doing so here gives one the impression that llvm.org
isn't playing by the same rules. No one is ever going to want to
use these releases if they don't follow similar release guidelines
for maintaining usability between releases.
             Jack
ps I did a test run of make check-gfortran for the build, but it
gets a lot of false positives on failures due to the bogus flags
turned on in darwin.h.

Hi,

I have a project idea, that I may pursue in the future. In the meantime, I think I need to spend the next few months getting familiar with LLVM and the code, which I can do through svn (thank you).

Also, for 2.5 and beyond, are you still in need of nightly testers? I will set up a Linux x86-64, and also have a Mac Pro (Leopard) for this. For Linux I plan on using the new Fedora 10, when it's released in November. Let me know if a nightly tester for 2.5 would be useful, or if you have enough.

thanks
Leo

My apologies for this appearing under this thread.

Hello, Jack

regression rules for releases? Not building a secondary language
on a primary target is usually considered a P1 regression for
FSF gcc.

The rules are basically the same. The only exception was gfortran:
1. Build fail on linux is considered a regression starting from 2.3 release
2. Build fail on darwin is considered a regression starting from 2.5 release

This was agreed recently, since previously no one even cared about
gfortran on darwin.

ps I did a test run of make check-gfortran for the build, but it
gets a lot of false positives on failures due to the bogus flags
turned on in darwin.h.

Btw, have you compared the performance of llvm-gfortran vs. native
gfortran? I remember you did this for the 2.3 release; any changes
since then?

The rules for each release are mentioned here:
http://llvm.org/docs/HowToReleaseLLVM.html

I'm going to expand the first two sections and move them to another document since no one reads this one. It is missing details on the targets and languages we use as release criteria, which I will update.

However, the release process is documented.

I am not saying that building gfortran is unimportant, but you do realize that we have _never_ included it as a release criterion, and, secondly, that we branched on Oct. 6th and prerelease1 testing closed on Oct. 19th. In order for us to get a release out in a timely manner, we have strict rules on what gets merged in and where the cutoff point is. For every patch that gets merged in, I have to do a full round of testing for every target we are supporting. It takes a lot of time.

Lastly, we are not the FSF. We have our own rules for releases, and I actually think we are even more strict when it comes to regressions. We try to release a high-quality product in a timely manner. It's a difficult thing to balance.

I'm sorry you are unhappy with this decision.

-Tanya

Hi Leo,

The answer to the question "Would we like more testers?" is always
"Yes!" :-) The more ways we can test LLVM, the better it becomes.

Thanks!
-bw

I have a project idea, that I may pursue in the future. In the
meantime, I think I need to spend the next few months getting familiar
with LLVM and the code, which I can do through svn (thank you).

Also, for 2.5 and beyond, are you still in need of nightly testers? I
will set up a Linux x86-64, and also have a Mac Pro (Leopard) for
this. For Linux I plan on using the new Fedora 10, when it's released
in November. Let me know if a nightly tester for 2.5 would be useful,
or if you have enough.

More testers are always a benefit to the LLVM community. What would also be very helpful is if you could watch your tester results and file bugs for any regressions you see.

Thank you!

-Tanya