JIT question: Inner workings of getPointerToFunction()


I was just reading through the Kaleidoscope tutorial (which is very well written and understandable, thanks!) hoping to get some insight into the workings of the JIT and the optimizations that are done at run time. I am curious how LLVM's JIT dynamically generates native code from bitcode at run time and then runs that code. (My question is also somewhat more general: how does any JIT system translate some form of low-level IR, which is presumably just data to the JIT, into native code that is actually made executable at run time?) Specifically, in the following code snippet (from the tutorial), how does getPointerToFunction() actually generate native code for the function LF, and why does the call through FP succeed as if FPtr were a pointer to statically compiled code?

// JIT the function, returning a function pointer.
void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

// Cast it to the right type (takes no arguments, returns a double) so we
// can call it as a native function.
double (*FP)() = (double (*)())FPtr;

I took a look at getPointerToFunction(), and it seems it calls materializeFunction() (is this the run-time code generator?), where most of the work is done. It would be great if you could point out a good starting place in the source, and any relevant documentation, for understanding the whole JIT'ing process (I have read the paper about Jello). Also, are there any dynamic optimizations currently done using the JIT?

Thanks for your time !

  • Prakash