I am planning to begin a project to explore the space of tuning LLVM
internals in an effort to increase performance. I am wondering if
anyone can point me to any parameterizations, heuristics, or
priority functions within LLVM that can be tuned/adjusted. So far,
I'm considering BranchProbabilityInfo and InlineCost. Does anyone have
any other suggestions?
A while ago, we had the idea of using compiler optimizations to increase the performance of verifying an app, rather than the performance of executing it. We found a number of settings that had an effect on verification performance:
- The amount of loop unswitching
- The amount of loop unrolling
- The amount of function inlining
- The amount of jump threading
- Whether to favor branches or select instructions
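If you want to experiment with knobs like these, many of LLVM's internal heuristics are exposed as hidden cl::opt command-line options on opt. A sketch of a parameter sweep is below; note that the specific flag names and value grids are my assumptions from mainline LLVM and vary across versions, so check `opt --help-hidden` against your own build before relying on them:

```python
# Sketch: sweeping a few LLVM heuristic thresholds through opt's cl::opt
# flags. Flag names below are assumptions to verify against your LLVM
# version with `opt --help-hidden`.
import itertools

# Hypothetical value grid for three of the settings listed above.
KNOBS = {
    "-unroll-threshold": [50, 150, 450],      # loop unrolling
    "-inline-threshold": [25, 225, 2000],     # function inlining
    "-jump-threading-threshold": [2, 6, 18],  # jump threading
}

def build_opt_cmd(bitcode, out, flags):
    """Assemble one opt invocation with the chosen threshold values."""
    cmd = ["opt", "-O2", bitcode, "-o", out]
    for name, value in flags.items():
        cmd.append(f"{name}={value}")
    return cmd

def sweep(bitcode):
    """Yield one command per point in the Cartesian product of knob values."""
    names = list(KNOBS)
    for values in itertools.product(*KNOBS.values()):
        yield build_opt_cmd(bitcode, "tuned.bc", dict(zip(names, values)))

if __name__ == "__main__":
    # Print the commands; run them with subprocess.run(cmd, check=True)
    # once you have an opt binary on PATH and a real input.bc.
    for cmd in sweep("input.bc"):
        print(" ".join(cmd))
```

Each compiled variant can then be timed under your verifier rather than under native execution, which is the measurement that actually matters here.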
The effect that these had on verification (in our case, exhaustive symbolic testing) was quite drastic, with speedups of 95x in some cases.
The core idea behind the work is that compilers use cost models that tell them how expensive an operation is; for verification, the costs are different. I’m writing because these settings also have an impact on execution performance. If you find other parameters that have large effects, I’d be thrilled to hear about them.
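To make the cost-model point concrete, here is a deliberately simplified toy model (not LLVM code) of why the branch-versus-select choice matters so much for exhaustive symbolic testing: each conditional branch on a symbolic value forks the explored state, while a select folds both values into one path:

```python
# Toy cost model: path counts under exhaustive symbolic exploration.
# Simplifying assumption: every condition is symbolic and independent.

def paths_with_branches(num_conditions):
    # Each symbolic branch forks the state, so paths grow exponentially.
    return 2 ** num_conditions

def paths_with_selects(num_conditions):
    # A select merges both values into a single symbolic expression: no fork.
    return 1

# With 20 data-dependent conditions: ~a million paths versus one.
print(paths_with_branches(20), paths_with_selects(20))
```

For native execution a predictable branch is often cheaper than a select, so the two cost models genuinely pull in opposite directions.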
PS: more details on our experiments: http://infoscience.epfl.ch/record/186012?ln=en
I've added a link to your paper on the LLVM publications page. Please feel free to email the list about other papers you or your group at EPFL publish that use LLVM; we love adding to the links on the Publications page.