runtime performance benchmarking tools for clang

Hi All,
Could anyone point me to some good benchmarking tools to measure the runtime performance of clang-compiled C++ applications?

Thanks!

  • Jyoti

Hi Jyoti,

The best benchmark is your own application, and since Clang & LLVM have plenty of aggressive optimizations (some of them may be bug-prone), it also depends on how you want to improve the performance.
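
For example, a minimal timing harness around one hot path of your own code is often more telling than a generic suite. This is just a sketch; work() below is a placeholder for your real workload:

    #include <chrono>
    #include <cstdio>

    static volatile double sink;   // keep the result live so the loop is not optimized away

    static double work() {         // placeholder: substitute the code path you care about
        double s = 0.0;
        for (int i = 1; i <= 10000000; ++i)
            s += 1.0 / i;
        return s;
    }

    int main() {
        auto t0 = std::chrono::steady_clock::now();
        sink = work();
        auto t1 = std::chrono::steady_clock::now();
        std::printf("work() took %.3f ms\n",
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
    }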

The following are some benchmarks that you could use to evaluate the performance of clang.

  1. Phoronix has done some performance testing using its Phoronix Test Suite (http://www.phoronix-test-suite.com/), which includes plenty of commonly used applications. The full list of applications in the Phoronix benchmarks can be found here: http://openbenchmarking.org/suites/pts

  2. For an industry-standard performance comparison, SPEC CPU is also a good choice. You can find out more here: http://www.spec.org/cpu/. General-purpose CPU vendors use it to demonstrate performance improvements.

  3. There are also some other small benchmarks that can test compiler performance, like polybench (http://www.cse.ohio-state.edu/~pouchet/software/polybench/), which focuses on evaluating the compiler's loop transformations.
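
To give a rough idea, the kernels in such suites are small, regular loop nests along the lines of the sketch below (this is just a generic matrix-multiply example, not actual polybench code; the size 512 is arbitrary):

    // Generic triple-nested loop kernel of the kind loop-oriented suites exercise.
    const int N = 512;

    void gemm(const double A[][512], const double B[][512], double C[][512]) {
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                double acc = 0.0;
                for (int k = 0; k < N; ++k)
                    acc += A[i][k] * B[k][j];   // candidate for unrolling/vectorization
                C[i][j] = acc;
            }
    }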

Regards,
Kun Ling

Hi Kun Ling & Bergstrom,
Thanks a lot for your earlier responses. We used the benchmarks in the LLVM test-suite to compare the execution times of code compiled by clang and gcc. It appears that clang is slower than gcc in cases that involve floating-point operations and recursive calls (note that pic/pie was enabled for both gcc and clang).

  1. For the lag in execution time due to recursive calls, it was obvious that resolving dynamic relocations via .plt indirections added to the delay. However, it was not clear how gcc managed to run faster than clang when the same libc.so.6 & ld-linux.so.3 were used by the executables produced by both compilers.
    What could be the possible reason? (A sketch of one factor worth ruling out follows after this list.)

  2. For the lag in execution time due to floating-point operations, it was clearly observed that gcc used the floating-point instruction FSQRT, whereas clang seemed to use an emulated function (?) via BL sqrt.
    Note that we used the following flags for both the clang and gcc compilations.

-march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16 -mtune=cortex-a8

In fact, I was surprised to see that even when "-march=armv7-a -mfloat-abi=hard -mfpu=vfpv3-d16 -mtune=cortex-a8"
was used, the generated code did not use the hardware vsqrt instruction; instead there was a bl sqrt instruction.
Could someone point out why vsqrt was not emitted in the assembly even though the softfp or 'hard' float ABI was specified?

  3. Could you suggest benchmarks specifically for floating point, other than those in the LLVM test-suite?
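
For reference, here is a rough sketch of one factor worth ruling out for question 1 (the function names are made up, and whether this explains the gcc/clang gap on these benchmarks would need to be confirmed against the generated assembly): when building position-independent code, a recursive call to an exported symbol may be routed through the PLT so the symbol remains interposable, while a hidden or internal function can be called directly.

    // Sketch only: compare how the self-recursive call is emitted in each case
    // when building with -fPIC / -fpie (e.g. by inspecting the assembly).

    // Exported symbol: the recursive call may go through the PLT so that the
    // symbol can still be interposed at load time.
    unsigned long fib_exported(unsigned n) {
        return n < 2 ? n : fib_exported(n - 1) + fib_exported(n - 2);
    }

    // Hidden visibility (or a 'static' helper) lets the compiler emit a direct
    // call for the recursion, avoiding the PLT indirection.
    __attribute__((visibility("hidden")))
    unsigned long fib_hidden(unsigned n) {
        return n < 2 ? n : fib_hidden(n - 1) + fib_hidden(n - 2);
    }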

The vsqrt instruction may not be generated automatically for platforms where math functions may set errno. Try compiling with -fno-math-errno and see if that helps.
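
As a quick check, a minimal probe like the one below can be compiled with and without -fno-math-errno and the assembly compared; the file name and the exact invocation are just an assumption for illustration, e.g. clang -O2 -march=armv7-a -mfloat-abi=hard -mfpu=vfpv3-d16 -fno-math-errno -S sqrt_test.cpp.

    // sqrt_test.cpp -- minimal probe for the sqrt -> vsqrt lowering.
    #include <cmath>

    double root(double x) {
        // With -fno-math-errno (or -ffast-math) this call is a candidate for a
        // single hardware vsqrt instruction; with errno semantics preserved it
        // generally stays a library call (bl sqrt) so errno can be set on a
        // domain error.
        return std::sqrt(x);
    }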

Hi David,
Thanks for your reply.
We enabled -ffast-math, which in turn adds -fno-math-errno to clang -cc1; this resulted in the sqrt call being replaced with the VSQRT instruction, and an improvement of ~40% over the earlier results was seen for some of the test cases.
A lag still exists when compared to gcc, though. We are currently investigating that. Any pointers in this direction would help.
Could you suggest some benchmarks specifically for floating point?

Thanks!
Jyoti Allur