Replicating -O3 optimizations manually

Hello llvm-dev,

my name is Emanuele and I am an intern at Arm. As part of the project I am doing here, I would like to manually replicate the optimizations that LLVM applies when I pass -O3. In other words, I would like to know which compilation flags/passes -O3 triggers.

I noticed that GCC documents, on its website, all the flags enabled by -O3 (https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html), but I wasn’t able to find anything similar in the LLVM documentation. On the other hand, I found that this command displays all the optimization passes applied by opt when the -O3 flag is set:

llvm-as < /dev/null | opt -O3 -disable-output -debug-pass=Arguments
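If it helps, the pass list printed by that command can be captured into a shell variable and fed back to opt. A minimal sketch (the sample `Pass Arguments:` line below is an illustrative stand-in, not the actual output of any particular LLVM version):

```shell
#!/bin/sh
# Turn a captured "-debug-pass=Arguments" line into a reusable pass list.
# The sample line is illustrative; in practice pipe the real opt output here.
line='Pass Arguments:  -tti -targetlibinfo -tbaa -scoped-noalias'

# Strip the "Pass Arguments:" prefix and squeeze runs of spaces.
passes=$(printf '%s\n' "$line" \
  | sed -e 's/^Pass Arguments:[[:space:]]*//' -e 's/  */ /g')

echo "$passes"
# The resulting string can then be handed back to opt, e.g.: opt $passes in.ll
```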

I tried to apply the same optimization passes through opt but, even though performance is similar, the resulting binary is slower than the one generated using -O3 (the binaries also differ, of course).

I also found another command that does something similar (it lists the sequence of optimization passes applied):

clang -O3 -mllvm -debug-pass=Arguments file.c

In this case, performance is still different, and some of the optimization passes listed in the last block (e.g. -machinemoduleinfo, -stack-protector, etc.) are unknown to opt.

That said, my question is: how can I find out which optimization passes/flags -O3 enables, so that I can manually apply the same optimizations and, hopefully, obtain the same binary and performance?

I am currently using LLVM version 5.0.2.

Thank you for both your help and your time!

Best regards

Emanuele

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

Hello Emanuele,

When you provide the optimization level -O3 to the clang driver, it does not simply schedule a sequence of passes to run on the intermediate representation: it schedules both target-independent and target-dependent passes.
Moreover, IIRC, the optimization level is also used in later stages of code generation to enable target-dependent optimizations (e.g. the vectorizer).

The most common workflow when someone wants to test their own pass/work within the LLVM toolchain is the following:

  • use clang to generate an LLVM-IR file
  • use opt to run your desired pass / pass sequence and output another LLVM-IR file
  • use clang -O3 to compile to executable machine code
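The three steps above could be scripted roughly as follows. This is only a sketch: main.c and the chosen pass names are placeholders, and it assumes clang and opt are on PATH.

```shell
#!/bin/sh
# Sketch of the clang -> opt -> clang workflow described above.
# main.c and the pass selection are placeholders; requires an LLVM toolchain.
set -e

# 1. Generate LLVM-IR from the C source (no optimization yet).
clang -S -emit-llvm main.c -o main.ll

# 2. Run the desired pass sequence on the IR, emitting IR again.
opt -mem2reg -instcombine main.ll -S -o main.opt.ll

# 3. Compile the (already optimized) IR to executable machine code.
clang -O3 main.opt.ll -o main
```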

However, with this approach you will run passes on the LLVM-IR twice.
There are use cases where this could invalidate your results.
Since opt stops at the LLVM-IR level, I would also suggest using other LLVM tools (such as llc / llvm-mc) to individually run the backend stages / pass sequences that opt cannot run.
An extensive list of tools/commands you can use is available at [0].
For your specific case, I would suggest you to have a look at this restricted schema [1].

There is yet another way to get into even finer-grained detail: you can check which driver actions clang runs for a given command line. See [2].

From there you can reconstruct the exact sequence of commands that the clang driver triggers.

If you can provide more details about your use case (performance measurement, pass development and testing, flag selection, phase ordering), we can suggest the most suitable approach.

Kind regards,

Stefano Cherubin

[0] http://llvm.org/docs/CommandGuide/
[1] https://github.com/skeru/LLVM-intro/blob/master/img/03/toolchain.pdf
[2] https://clang.llvm.org/docs/DriverInternals.html#driver-stages

Hi Stefano,
I also have the problem Emanuele described. You say that clang schedules target-independent and target-dependent passes.
However, when I use lli to execute bitcode generated by opt with -O3, or with the same optimization passes as -O3, performance still differs.
So does the -O3 option perform some special operations? I read the source code of opt, but I cannot find the reason.

Best regards
Zide

Hi Stefano,

first of all, thank you for your reply!

Here are the compilation steps I am currently applying:

  1. clang main.c -Xclang -disable-O0-optnone -fomit-frame-pointer -Xclang -vectorize-loops -Xclang -vectorize-slp -momit-leaf-frame-pointer -S -emit-llvm -o main.ll
  2. opt main.ll $myPasses -o main.bc
  3. llc main.bc -o main.s
  4. clang -c main.s
  5. clang main.o -lm -o exe -mno-relax-all

With respect to each step:

  1. I use -disable-O0-optnone because I noticed that, when I do not use any -Olevel flag (leaving the default -O0), opt ignores most of the optimization passes I provide.

Moreover, using the following command:

clang file.c -xc -O3 -o /dev/null -###

I noticed that clang activates some optimization flags (e.g. -vectorize-loops, -vectorize-slp) that are otherwise not enabled at -O0.

  2. $myPasses contains all the optimization passes extracted using this command:

llvm-as < /dev/null | opt -O3 -disable-output -debug-pass=Arguments

I also tried using the passes provided by this command:

clang -O3 -mllvm -debug-pass=Arguments file.c

but, as I said before, some of the optimization passes generate an error.

  3. -mno-relax-all comes from the fact that -mrelax-all is enabled at -O0, while it is not at -O3.

I am currently measuring performance in terms of execution time, using a suite of benchmarks that automatically reports the execution time of each benchmark. In some cases, my execution times are close to the ones produced using -O3, while, in other cases, my results are definitely worse.

I also tried to compare the IR generated by step 2) against the one generated by the following command:

clang main.c -S -emit-llvm

and, of course, they are different. I noticed that the latter also contains more metadata.

Am I missing any optimizations?

Thank you for your help!

Best regards

Emanuele Del Sozzo


Hi Zide,

the scope of opt is limited to LLVM-IR, which is meant to be target-independent.
In order to apply backend optimizations you need to lower the representation to something closer to the machine level.
I would suggest measuring performance on machine code, not LLVM-IR.

To this end, please refer to the setup Emanuele is using.

However, I may not have properly understood your test.

lli is the LLVM-IR interpreter, and it is meant more for functional testing than for performance testing.
Are you comparing the performance of machine code generated by clang -O3 against the performance of lli optimized_IR.bc ?

Best regards,

Stefano Cherubin


Hi Emanuele,

The first thing I would highlight in your compilation flow is the absence of any optimization level in compilation step 3.
If your goal is to compare code and performance against clang -O3, you probably need to add -O3 to the llc command as well.

I honestly don’t know which other steps the optimization level of clang may impact.
If, after the aforementioned fix, the difference is still not negligible, I suggest replacing steps 3, 4, and 5 with the simpler:
clang -O3 main.bc -o exe -lm

Best regards,

Stefano Cherubin

Hi Zide,

I think I found the right way to reach my goal.

I used the following command:

clang -O3 -Xclang -disable-llvm-optzns main.c -S -emit-llvm -o main.ll

to generate an IR file enriched with all the metadata that would otherwise not be generated at -O0. Moreover, the -disable-llvm-optzns flag ensures that none of the optimization passes has been applied to the IR yet.

In this way, I can replicate the -O3 result by applying the optimization passes using opt. Apparently, this metadata is necessary to fully optimize the code.
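For reference, the complete pipeline would then look roughly like this (a sketch assuming an LLVM toolchain on PATH; file names are placeholders):

```shell
#!/bin/sh
# Replicating clang -O3 with explicit per-stage commands (sketch).
set -e

# 1. Emit IR carrying the -O3 metadata, but with no passes applied yet.
clang -O3 -Xclang -disable-llvm-optzns -S -emit-llvm main.c -o main.ll

# 2. Apply the -O3 pass pipeline explicitly at the IR level.
opt -O3 main.ll -o main.bc

# 3. Lower to assembly with backend optimizations enabled as well.
llc -O3 main.bc -o main.s

# 4. Assemble and link.
clang main.s -lm -o exe
```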

I hope that this may help you too.

Best regards

Emanuele Del Sozzo

Hi Emanuele,
Thank you for your reply!
I cannot replicate the -O3 result using LLVM 6.0 with the command you provided. Actually, I had previously been using the following command to generate the IR file, which is equivalent to yours:
clang -O3 -Xclang -disable-llvm-passes -S -emit-llvm main.c -o main.ll
Currently, I want to test the passes in LLVM. I am interested in the performance of a pass or of a pass sequence, so I chose the performance of -O3 as a baseline.

The experiment steps are as follows:

  1. clang -O3 -Xclang -disable-llvm-passes -S -emit-llvm main.c -o main.ll

2.a. opt -O3 main.ll -o main-opt1.ll
2.b. opt (the same passes sequence as O3) main.ll -o main-opt2.ll

3.a llc main-opt1.ll -o main-opt1.s
3.b llc main-opt2.ll -o main-opt2.s

4.a clang main-opt1.s -o main-opt1
4.b clang main-opt2.s -o main-opt2

$ time ./main-opt1
real 0m0.846s
user 0m0.845s
sys 0m0.001s

$ time ./main-opt2
real 0m0.956s
user 0m0.956s
sys 0m0.001s

where the pass sequence matching -O3 is generated by the following command:
llvm-as < /dev/null | opt -O3 -disable-output -debug-pass=Arguments

From the results, we can see that the execution time of main-opt2 is about 13% higher than that of main-opt1.
As Stefano said, clang schedules target-independent and target-dependent passes. So I used lli to execute main-opt1.ll and main-opt2.ll
to reduce the influence of target-dependent passes; the results show the same gap as above:
$ time lli main-opt1.ll

real 0m0.878s
user 0m0.878s
sys 0m0.000s

$ time lli main-opt2.ll

real 0m0.978s
user 0m0.978s
sys 0m0.000s

Thus, for my purpose, if I cannot get the same results using -O3 and the pass sequence extracted from -O3, respectively, I cannot say that performance comparisons between other pass sequences and -O3 are fair.

I do not know whether I have made some mistake.

In addition, I find that the pass sequences “-early-cse-memssa -lcssa-verification -early-cse-memssa”,
“-early-cse-memssa -verify -early-cse-memssa”, “-early-cse-memssa -demanded-bits -early-cse-memssa”, and “-early-cse-memssa -early-cse-memssa”
cause the following crash with LLVM 6.0.0:
LLVMSymbolizer: error reading file: No such file or directory
#0 0x0000000001a68794 (opt+0x1a68794)
#1 0x0000000001a68a76 (opt+0x1a68a76)
#2 0x00007f96a098c390 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x11390)
#3 0x00000000015fc64e (opt+0x15fc64e)
#4 0x000000000160065d (opt+0x160065d)
#5 0x00000000015fdb08 (opt+0x15fdb08)
#6 0x000000000075aaa6 (opt+0x75aaa6)
#7 0x00007f969f924830 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20830)
#8 0x000000000074c1b9 (opt+0x74c1b9)
Stack dump:
0. Program arguments: opt -early-cse-memssa -lcssa-verification -early-cse-memssa main.bc -o main-opt.bc
Segmentation fault (core dumped)

If possible, please try these pass sequences on your system using LLVM 5.0.2. If they cause the same error on your system as well, it could be a bug in LLVM.

Thank you for both your help and your time!

Best regards
Zide

Hi Zide,

thank you for the more detailed clarification of your setup.
As far as I can see, the only difference between your a) steps and your b) steps lies in the pass sequence you provide.
Obviously, the sequence of passes you get from [1] is not the same as the sequence of passes opt runs on your main.ll.
I suggest using the same command as in [1], but with your benchmark's actual IR as the input to opt; see [2].

I crafted a main.c file with an empty main and ran your step 1 to generate the LLVM-IR.
I compared the output of [1] against the output of [2] and noticed a small difference: [2] schedules one more pass (-targetpassconfig) than [1].

I also tried to craft another main.ll file via llvm-stress (an LLVM utility that generates random valid LLVM-IR test files).
In this case, [1] and [2] do not differ.

I guess the difference in scheduled passes is due to source-code metadata that has been transferred to the IR from the source file.
I suspect that if you feed [2] a full benchmark (I don’t know which one you are using) you may get a slightly different pass sequence.

I hope this could help you.

Best regards,

Stefano Cherubin

[1] llvm-as < /dev/null | opt -O3 -disable-output -debug-pass=Arguments
[2] opt -O3 -disable-output -debug-pass=Arguments main.ll
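To spot such differences quickly, the two pass lists can be diffed word by word. A small self-contained example follows; the lists below are illustrative stand-ins, and in practice you would capture them from commands [1] and [2]:

```shell
#!/bin/sh
# Diff two pass lists word by word. The lists are illustrative stand-ins
# for the output of commands [1] and [2] above.
list1='-tti -tbaa -verify'
list2='-tti -tbaa -targetpassconfig -verify'

# Intentionally unquoted so each pass lands on its own line,
# which lets diff line up the matching passes.
printf '%s\n' $list1 > /tmp/passes1.txt
printf '%s\n' $list2 > /tmp/passes2.txt

diff /tmp/passes1.txt /tmp/passes2.txt || true
```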
