I am writing a compiler using LLVM 3.2 to generate native code (currently x86-64) from IR. The native code will be linked by the system linker (not a JIT).
The compiler generates calls to a run-time library to perform many operations. Therefore, each Module that I generate needs to have declarations for all of these run-time functions added to it.
Question: is this actually true? I am assuming that LLVM works like a C++ compiler: before you can call a function from anywhere in a compilation unit, its prototype must be in scope.
Initially, I did this by calling Function::Create for each declaration I wanted to make. However, this is starting to “not scale”.
I also want to experiment with defining some of these library functions using LLVM IR directly. I can then have LLVM inline and optimize calls to these functions. Given that many of the arguments to the functions are constants, there is plenty of opportunity for loop unrolling and optimization.
To this end, I would like to read LLVM bitcode into an existing module. The bitcode would contain declarations for all of my library functions, plus definitions for anything I want to try to inline and optimize.
The bitcode reader (llvm/Bitcode/ReaderWriter.h) provides an API for loading bitcode and returning a Module as a result. One possibility is to read the bitcode into a skeleton module and then have the compiler emit further code into that module. I won't have control over the module's name if I do this; I'm not sure whether that will cause a problem down the road.
There also seems to be a mechanism for adding “library dependencies” to a Module. That suggested I could read my bitcode into a master library module held off to the side, and have the compiler reference that master module as a library dependency in everything it generates. However, I couldn’t easily see how the library mechanism works.
What’s the most reasonable way to declare a large number of functions in a module?