I have evaluated Polly’s performance on the LLVM test-suite with the latest LLVM (r188054) and Polly (r187981). Results can be viewed at: http://184.108.40.206:8000.
Hi Star Tan,
thanks for the update.
There are five new test configurations, and each is run with 10 samples:
clang (run id = 27): clang -O3
pollyBasic (run id = 28): clang -O3 -load LLVMPolly.so
pollyNoGen (run id = 29): pollycc -O3 -mllvm -polly-optimizer=none -mllvm -polly-code-generator=none
pollyNoOpt (run id = 30): pollycc -O3 -mllvm -polly-optimizer=none
pollyOpt (run id = 31): pollycc -O3
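For reference, the five configurations above correspond roughly to the following invocations (a sketch only; `pollycc` is the Polly driver script, and the exact way of loading `LLVMPolly.so` into clang may differ between revisions):

```shell
# Run id 27: baseline, plain clang at -O3
clang -O3 -c test.c

# Run id 28: Polly loaded but no Polly passes doing real work
clang -O3 -load LLVMPolly.so -c test.c

# Run id 29: Polly analysis only (SCoP detection/canonicalization),
# optimizer and code generator disabled
pollycc -O3 -mllvm -polly-optimizer=none \
        -mllvm -polly-code-generator=none -c test.c

# Run id 30: Polly code generation without the polyhedral optimizer
pollycc -O3 -mllvm -polly-optimizer=none -c test.c

# Run id 31: full Polly, optimizer and code generator enabled
pollycc -O3 -c test.c
```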
Here is the performance comparison for the newest Polly:
It seems the machine is down/unreachable at the moment?
I have restarted the LNT server. It is available now.
Overall, 198 benchmarks improved and 16 benchmarks regressed. In particular, with the recent performance-oriented patches for ScopDetect/ScopInfo/ScopDependences/…, we have significantly reduced Polly's compile-time overhead for a large number of benchmarks, such as:
Very nice work!
However, Polly can still lead to significant compile-time overhead for many benchmarks.
As shown on:
there are 11 benchmarks whose compile time is more than 2x that of clang. Furthermore, it seems that the PollyDependence pass is still one of the most expensive passes in Polly.
We need to look at these on a case-by-case basis. A 2x compile-time
increase for large programs, where Polly is run on only small parts, is
not what we want. However, for small micro kernels (e.g. Polybench)
where we can significantly increase the performance of the generated
code, this is in fact a good baseline - especially as we have not spent
too much time optimising this.
Yes, we should look into the compile-execution performance trade-off.
I have summarized the benchmarks whose compile-time overhead exceeds 200% as follows:
compile_time(+1275.00%), execution_time (0%)
compile_time(+491.80%), execution_time (0%)
The results show that Polly leads to significant compile-time overhead for some benchmarks without any improvement in execution performance.
I have reported a bug for nestedloop (http://llvm.org/bugs/show_bug.cgi?id=16843), and I will report bugs for the other benchmarks whose compile time increases significantly without any execution performance improvement.
Furthermore, you can view the top 10 compiler passes when compiling with Polly as follows:
Even without optimization and code generation, Polly still increases the compile time of some benchmarks.
As shown on:
there are 10 benchmarks that incur more than 10% extra compile-time overhead compared with clang.
Having to pay at most a 10% slowdown to decide whether Polly should be
run (including all the canonicalization) is actually not bad, especially
as the average on normal programs is probably a lot lower.
Still, we should have a look into why this is happening for some of
the biggest slowdowns.
Can you ping me when the server is up again? I would like to see which
kernels are slowed down most.
The server is up now.
For your information, you can view the top 10 compiler passes when compiling with “Polly without optimization and code generation” as follows:
I have checked some benchmarks. It seems that the extra compile-time overhead mainly results from the following Polly passes:
Polly - Create polyhedral description of Scops
Combine redundant instructions
Polly - Detect static control parts (SCoPs)
Induction Variable Simplification (Polly version)
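Per-pass timings like the ones above can be collected with LLVM's built-in pass timing support; a minimal sketch (assuming the flag spellings of the LLVM/Polly revisions under test):

```shell
# Ask the LLVM pass manager to time every pass and print a summary
# report at the end of compilation; the most expensive Polly passes
# (ScopInfo, ScopDetection, dependence analysis, ...) show up near
# the top of the report.
pollycc -O3 -mllvm -time-passes -c test.c

# Equivalently, when driving Polly through opt on LLVM IR:
opt -load LLVMPolly.so -O3 -polly -time-passes test.ll -o /dev/null
```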