Designing on-disk ObjectCache for JIT

Hi Lang, LLVM,

Do you have any suggestions about implementing an on-disk ObjectCache for MCJIT/Orc? Is anyone able to point me to, or share, some existing code?

I also noticed that “cache pruning” support was recently added to LLVM (ported from Clang) [1]. Would it be a good idea to extend it to also manage saving and loading objects to disk, turning it into a full-featured on-disk cache engine?

[1] https://reviews.llvm.org/D18422

– Paweł

Hi,

> Hi Lang, LLVM,
>
> Do you have any suggestions about implementing an on-disk ObjectCache for
> MCJIT/Orc? Is anyone able to point me to, or share, some existing code?

You can find what we did for llvmlite and Numba below.

There's a low-level C shim in llvmlite, wrapping the C++ APIs to make
them callable using ctypes from Python:
https://github.com/numba/llvmlite/blob/master/ffi/executionengine.cpp#L162-L247

Then there's a Python wrapper around that in llvmlite, to make the API
handle regular Python objects:
https://github.com/numba/llvmlite/blob/master/llvmlite/binding/executionengine.py#L149-L198

Then there's a higher-level abstraction in Numba that works around the
callback-oriented nature of the LLVM API, and also handles additional
tasks such as finalizing the module after loading the object code:
https://github.com/numba/numba/blob/master/numba/targets/codegen.py#L236-L349

I'm afraid this code is quite a mouthful :-)
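
In case a condensed view helps: underneath all of those layers, the C++ flow is small. Here is a rough MCJIT-based sketch (the function is illustrative, not llvmlite's actual code; `Cache` can be any `llvm::ObjectCache` implementation):

```cpp
// Rough sketch of the callback-oriented flow those wrappers expose, assuming
// MCJIT and that native target initialization happens elsewhere. `Cache` can
// be any llvm::ObjectCache implementation.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"   // force-links the MCJIT engine
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include <memory>
#include <string>

void jitWithCache(std::unique_ptr<llvm::Module> M, llvm::ObjectCache &Cache) {
  std::string Err;
  std::unique_ptr<llvm::ExecutionEngine> EE(
      llvm::EngineBuilder(std::move(M)).setErrorStr(&Err).create());
  if (!EE)
    return;

  // From here on the engine drives the cache: it calls Cache.getObject()
  // before compiling a module and Cache.notifyObjectCompiled() afterwards.
  EE->setObjectCache(&Cache);

  // Whether the object came from the cache or from fresh codegen, it still
  // has to be finalized (relocations applied, memory made executable) before
  // use -- this is the extra step the Numba layer takes care of.
  EE->finalizeObject();

  // ... EE->getFunctionAddress(...) and call into the JITed code ...
}
```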

Regards

Antoine.

Hi Pawel,

> Do you have any suggestions about implementing an on-disk ObjectCache for MCJIT/Orc? Is anyone able to point me to, or share, some existing code?

There is a demo implementation in llvm/tools/lli/lli.cpp, but I don’t think it goes beyond a proof of concept.
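
For what it’s worth, a minimal on-disk cache in the spirit of that demo could look roughly like the sketch below; the cache directory layout and the module-identifier keying are illustrative choices, not anything lli prescribes:

```cpp
// Sketch of an on-disk ObjectCache: compiled objects are written to, and read
// back from, a cache directory keyed by the module identifier.
#include "llvm/ADT/SmallString.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <string>

class SimpleDiskObjectCache : public llvm::ObjectCache {
  std::string CacheDir;

  // Illustrative keying: a production cache would rather hash the module
  // contents (plus compilation options) than trust the module identifier.
  std::string cachePathFor(const llvm::Module *M) const {
    llvm::SmallString<128> Path(CacheDir);
    llvm::sys::path::append(Path, M->getModuleIdentifier() + ".o");
    return Path.str().str();
  }

public:
  explicit SimpleDiskObjectCache(std::string Dir) : CacheDir(std::move(Dir)) {
    // Best effort: if the directory cannot be created, every lookup misses.
    (void)llvm::sys::fs::create_directories(CacheDir);
  }

  // Called after codegen: persist the freshly compiled object file.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    std::error_code EC;
    llvm::raw_fd_ostream OS(cachePathFor(M), EC,
                            llvm::sys::fs::OF_None /* F_None in older LLVM */);
    if (!EC)
      OS << Obj.getBuffer();
  }

  // Called before codegen: return a cached object, or nullptr on a miss so
  // the JIT compiles the module as usual.
  std::unique_ptr<llvm::MemoryBuffer> getObject(const llvm::Module *M) override {
    auto Buf = llvm::MemoryBuffer::getFile(cachePathFor(M));
    if (!Buf)
      return nullptr;
    return std::move(*Buf);
  }
};
```

With MCJIT such a cache is attached via ExecutionEngine::setObjectCache(); with Orc it can be handed to the IR compiler, e.g. through orc::SimpleCompiler’s optional ObjectCache parameter.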

> I also noticed that “cache pruning” support was recently added to LLVM (ported from Clang) [1]. Would it be a good idea to extend it to also manage saving and loading objects to disk, turning it into a full-featured on-disk cache engine?

I can imagine LTO and the JIT sharing some caching infrastructure. I haven’t had time to work on the caching problem yet, but I’d be very happy to see more in-tree support for it.

Cheers,
Lang.