I am new to LLVM. I am looking for an example somewhere, or a
walkthrough/guide, on how to do runtime optimization using LLVM. Ideally, I
would like to:
1. Compile the program from C to LLVM bitcode, or to native code with the
LLVM IR embedded in the binary.
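For step 1, I gather something like the following works with a stock
Clang/LLVM install (file names here are placeholders):

```shell
# Compile C to LLVM bitcode (no native code generated yet)
clang -O1 -emit-llvm -c prog.c -o prog.bc

# Disassemble to human-readable IR for inspection
llvm-dis prog.bc -o prog.ll
```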
2. Run the binary under LLVM's interpreter and profile it as it runs. I
hope LLVM supports all of this out of the box, so that I don't have to
insert my own profiling instrumentation.
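For step 2, I know lli can run bitcode, and the toolchain has
instrumentation-based PGO; this is offline profiling rather than the online
kind I am after, but it shows the built-in instrumentation (file names are
placeholders):

```shell
# Run the bitcode under the interpreter (or JIT, without the flag)
lli -force-interpreter prog.bc

# Instrumentation-based profiling with native compilation
clang -fprofile-instr-generate prog.c -o prog
./prog
llvm-profdata merge -o prog.profdata default.profraw
llvm-profdata show prog.profdata
```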
3. Register a callback that gets called when a function or a basic block
becomes hot. I would then like to transform that basic block along with the
blocks connected to it in the CFG; ideally, my function would also get
called when a trace or a collection of basic blocks becomes hot.
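To make step 3 concrete, here is roughly what I imagine the instrumentation
doing — a minimal sketch in plain C, where all the names (count_block,
on_block_hot, HOT_THRESHOLD) are made up for illustration, not LLVM API:

```c
#include <stdio.h>

#define NUM_BLOCKS 8
#define HOT_THRESHOLD 1000  /* assumed hotness cutoff */

unsigned long block_counter[NUM_BLOCKS]; /* one counter per basic block */
int block_is_hot[NUM_BLOCKS];

/* Callback fired the first time a block crosses the threshold; a real
   system would queue the block (and its CFG neighbours) for recompilation. */
void on_block_hot(int block_id) {
    block_is_hot[block_id] = 1;
    printf("block %d became hot\n", block_id);
}

/* Instrumentation stub that would be emitted at each basic-block entry. */
void count_block(int block_id) {
    if (++block_counter[block_id] == HOT_THRESHOLD)
        on_block_hot(block_id);
}
```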
4. Do some transformations on the IR, i.e. LLVM->LLVM transforms on the
aforementioned hot regions/blocks.
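For step 4, I assume a custom pass run through opt is the standard route;
for example, with a built-in pass standing in for my own hot-region pass:

```shell
# LLVM->LLVM transform over the bitcode; mem2reg is just a stand-in
# for a custom pass here
opt -passes=mem2reg prog.bc -o prog.opt.bc
```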
5. I would also like some control over the JIT. In the LLVM->LLVM transform
above, I would have placed some "special" instructions (illegal opcodes,
for instance), and when the JIT is about to translate one of those, my
routine should get called so that I can turn it into "special" native
instructions.
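I believe the real hook point for step 5 would live in the JIT/codegen
layer (e.g. a JITEventListener, or target-specific lowering), but the
byte-patching idea can be sketched in plain C. The sentinel encoding used
here (x86 ud2, bytes 0x0F 0x0B) and the function name are assumptions for
illustration only:

```c
#include <stddef.h>

/* Scan an emitted code buffer for a 2-byte sentinel opcode (assumed to be
   x86 ud2, 0x0F 0x0B) and overwrite each occurrence with replacement
   bytes of the same size. Returns the number of sites patched. A real
   rewriter would have to handle size changes and relocations. */
size_t patch_sentinels(unsigned char *code, size_t len,
                       const unsigned char *repl, size_t repl_len) {
    size_t patched = 0;
    for (size_t i = 0; i + 1 < len; i++) {
        if (code[i] == 0x0F && code[i + 1] == 0x0B) {
            for (size_t j = 0; j < 2 && j < repl_len; j++)
                code[i + j] = repl[j];
            patched++;
        }
    }
    return patched;
}
```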
6. I then want to execute the newly generated code, with the profiling
instrumentation removed but my special native instructions left intact.
I would like to know if there is such an example in the LLVM package. If
not, which of the .cpp files should I begin hacking on to implement each of
these steps?