I am incorporating JIT compilation into my project using a wrapper around LLVM. In one of my builds, the JIT process completes much, much faster than in the others, despite full release optimizations being enabled on all builds.
Here is an overview of my project:
→ LLVM is compiled as a static library.
→ That library is linked into some Rust code, which is in turn compiled to a static library.
→ The Rust library is used directly by a benchmark.
→ It is also used by a C++ project that provides a GUI/frontend built on the JUCE framework.
Overall, I have three artifacts:
- The benchmark, which is produced by Rust’s toolchain.
- A standalone application with a GUI, which is produced by a C++ toolchain.
- A VST3 plugin (basically a shared library) with a GUI, which is produced by a C++ toolchain.
The standalone GUI artifact performs dramatically better: it completes JIT compilation of a test case in ~400 ms, while the other two take ~11,000 ms (roughly 27x slower). The two GUI artifacts are built from the same code using the same build script provided by the JUCE framework, and their linker arguments are nearly identical.
The function taking the bulk of the time is LLVMGetFunctionAddress, which internally calls llvm::ExecutionEngine::getFunctionAddress. The execution engine is created with LLVMCreateJITCompilerForModule at optimization level 1, which internally uses llvm::EngineBuilder to construct an MCJIT engine.
Where might this performance difference be coming from? Any ideas for how I could track down the source?