I run a large piece of code in the JIT, and it runs only marginally faster at optimization levels 1, 2 and 3. I think the differences are within the margin of error.
Level    User time
0        17339 ms
1        16913 ms
2        16891 ms
3        16898 ms
Level is set with builder->setOptLevel(olev);
Compilation time is excluded by taking the address of the single top-level function with getPointerToFunction before the timed run, so getPointerToFunction itself is not part of the measurement.
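For reference, here is a minimal sketch of the setup described above. It assumes the legacy JIT and the pre-3.3 header layout; the function name "top", the timing code and runTimed itself are placeholders rather than the actual benchmark:

    #include <cstdio>
    #include <ctime>
    #include "llvm/Module.h"                          // pre-3.3 header layout
    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/JIT.h"             // legacy JIT

    // Compile the module at the given optimization level and time one run of
    // the top-level function. getPointerToFunction triggers JIT compilation,
    // so it is called before the clock starts and is excluded from the timing.
    void runTimed(llvm::Module *module, llvm::CodeGenOpt::Level olev) {
        llvm::EngineBuilder builder(module);
        builder.setEngineKind(llvm::EngineKind::JIT);
        builder.setOptLevel(olev);                    // None/Less/Default/Aggressive
        llvm::ExecutionEngine *ee = builder.create();

        llvm::Function *top = module->getFunction("top");  // placeholder name
        void *addr = ee->getPointerToFunction(top);   // compile now, outside the timed region

        std::clock_t t0 = std::clock();
        ((void (*)())addr)();                         // only this call is timed
        std::clock_t t1 = std::clock();
        std::printf("user time: %.0f ms\n", 1000.0 * (t1 - t0) / CLOCKS_PER_SEC);
    }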
Why does optimization barely speed it up? And what ways are there to control optimization other than setOptLevel?
My expectations are shaped by experience with gcc, where raising the inlining level speeds code up significantly. Does the JIT inline at all?
It's hard to believe that even purely local optimization (no inlining) wouldn't bring the timing down by at least 10%.
I dump the module before running the optimization passes, which writes the pre-optimized code to the debug stream, and dump it again afterwards; it's then pretty easy to see what changes have been made to the LLVM IR. They are generally quite significant, as the optimizer passes do some amazing things (see the sketch below).
createStandardModulePasses is the code used by opt to optimize the LLVM IR, so passing 3 as the optimization-level parameter is the equivalent of passing -O3 to opt. It adds a ton of different optimization passes; you can find the code for it in the file opt.cpp.
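If you want to run that same pipeline on a module before JIT-ing it, something along these lines should work. This is only a sketch for the older releases that still ship llvm/Support/StandardPasses.h (newer ones replaced it with PassManagerBuilder); the function optimizeModule and the exact createStandardModulePasses parameter list are assumptions that may need adjusting for your LLVM version. Note that passing an inlining pass here is what actually enables inlining; as far as I know, setOptLevel only affects the code generator, not IR-level passes like the inliner:

    #include "llvm/Module.h"                      // pre-3.3 header layout
    #include "llvm/PassManager.h"
    #include "llvm/Support/StandardPasses.h"      // createStandardModulePasses
    #include "llvm/Transforms/IPO.h"              // createFunctionInliningPass

    // Apply an opt-style -O<level> pipeline to the module before handing it
    // to the JIT. Dumping before and after makes the IR changes easy to diff.
    void optimizeModule(llvm::Module *module, unsigned optLevel) {
        llvm::PassManager pm;

        module->dump();                           // pre-optimized IR (stderr)

        llvm::createStandardModulePasses(&pm, optLevel,
                                         /*OptimizeSize=*/false,
                                         /*UnitAtATime=*/true,
                                         /*UnrollLoops=*/true,
                                         /*SimplifyLibCalls=*/true,
                                         /*HaveExceptions=*/false,
                                         /*InliningPass=*/llvm::createFunctionInliningPass());

        pm.run(*module);                          // rewrites the module in place

        module->dump();                           // optimized IR, for comparison
    }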
One more thing: if you really want to find out what's going on at each pass of the optimization process, place this code:
    // If this is set to 1 then the JIT engine will print out machine code
    // between optimization passes.
    llvm::PrintMachineCode = 1;
somewhere before running the pass manager, and it will print out the LLVM IR after every single optimization pass. Despite the name, it doesn't really print machine code; it prints an LLVM IR assembly-language dump of the module.