ORC JIT API, object files and stackmaps

I have a few questions about the new ORC JIT.

I saw Lang Hames’ (hi!) excellent talk at the llvm-dev meeting a few weeks ago. The ORC JIT is undergoing some API changes and I’d like/need to take advantage of them.

(1) How do I take ownership of the ObjectFile once the ORC JIT has created it?
I’d like to take ownership of object files generated by the ORC JIT so that I can save them to disk and in later sessions reload them.
(2) How would I pass an ObjectFile saved in question#1 back to ORC so that it will relocate it and generate function pointers?
(3) How do I get access to the relocated ObjectFile sections?
Currently I subclass SectionMemoryManager and implement allocateDataSection(…).
I can get the memory for the “__llvm_stackmaps” section, but I don’t know when/if the contents have been fully set up with relocated function pointers.
(4) For the “__llvm_stackmaps” section - will I need to do any relocation to obtain the function pointers?

Background:
I’m using llvm.experimental.stackmaps to register one variable in each stack frame that contains spilled register arguments.
I’ve figured out how to get access to the stackmaps for code that I load into my system from dynamic libraries that our compiler generates.
The answers to questions above will help me get access to the stackmaps from ORC JITted code.

I think I found the answer to #3 and #4.
(a) I overloaded the SectionMemoryManager::finalizeMemory(…) method (sketched below).
(b) I first call the base class’s method (maybe not necessary).
(c) At this point the __llvm_stackmaps section that I saved in thread-local memory when allocateDataSection was called appears to be fully set up and relocated.
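In outline, the memory manager looks something like this (a minimal sketch rather than the exact code; the class name and the registerStackmaps hook are placeholders):

#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include <cstdint>
#include <string>

// Sketch: remember where __llvm_stackmaps was allocated and only read it
// after finalizeMemory() has run, when relocations have been applied.
class StackmapAwareMemoryManager : public llvm::SectionMemoryManager {
public:
  uint8_t *allocateDataSection(uintptr_t Size, unsigned Alignment,
                               unsigned SectionID, llvm::StringRef SectionName,
                               bool IsReadOnly) override {
    uint8_t *Addr = SectionMemoryManager::allocateDataSection(
        Size, Alignment, SectionID, SectionName, IsReadOnly);
    // Note: the section contents are NOT relocated yet at this point.
    if (SectionName == "__llvm_stackmaps" || SectionName == ".llvm_stackmaps") {
      StackmapAddr = Addr;
      StackmapSize = Size;
    }
    return Addr;
  }

  bool finalizeMemory(std::string *ErrMsg = nullptr) override {
    // Let the base class apply memory protections first.
    bool Err = SectionMemoryManager::finalizeMemory(ErrMsg);
    // By now the relocations (including function addresses) are in place.
    if (StackmapAddr)
      registerStackmaps(StackmapAddr, StackmapSize);
    return Err;
  }

private:
  void registerStackmaps(uint8_t *Addr, uintptr_t Size) {
    // ... record/parse the stackmaps for the runtime (user code) ...
  }
  uint8_t *StackmapAddr = nullptr;
  uintptr_t StackmapSize = 0;
};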

Hi Christian

Your use case seems to have similar requirements to remote JITing in ORC. So far I haven’t used that part myself, and I am sure Lang can tell you much more about it. However, this comment on the RemoteObjectClientLayer class sounds promising for your questions (1) and (2):

/// Sending relocatable objects to the server (rather than fully relocated
/// bits) allows JIT’d code to be cached on the server side and re-used in
/// subsequent JIT sessions.

There are a few tests here that illustrate its usage:
Note that it still uses LegacyJITSymbolResolver, so I guess it may change soon. Hope it helps for the time being.

Cheers, Stefan

Hi Christian, Stefan,

(1) How do I take ownership of the ObjectFile once the ORC JIT has created it?
I’d like to take ownership of object files generated by the ORC JIT so that I can save them to disk and in later sessions reload them.

It depends on whether you are writing a custom JIT class or using LLJIT.

If you are writing a custom JIT class: At the moment the best way to do this is with an ObjectCache attached to your IR compiler (see llvm/ExecutionEngine/Orc/CompileUtils.h). This makes it easy to build an association between an IR input file and a compiled object file. In the future I plan to add an additional notification callback to RTDyldObjectLinkingLayer which can be used to transfer ownership of loaded objects back to the client.
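For illustration, a minimal disk-backed ObjectCache might look something like this (a sketch only: the class name, cache directory, and “module identifier to file name” scheme are placeholders, error handling is omitted, and newer LLVM spells the open flag sys::fs::OF_None):

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

class DiskObjectCache : public llvm::ObjectCache {
public:
  DiskObjectCache(std::string Dir) : CacheDir(std::move(Dir)) {}

  // Called after each compile: this is where you get hold of the object
  // file bytes and can write them to disk.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    std::error_code EC;
    llvm::raw_fd_ostream OS(pathFor(M), EC, llvm::sys::fs::F_None);
    if (!EC)
      OS << Obj.getBuffer();
  }

  // Called before compiling a module: returning a buffer short-circuits
  // compilation and the cached object is used instead.
  std::unique_ptr<llvm::MemoryBuffer> getObject(const llvm::Module *M) override {
    auto Buf = llvm::MemoryBuffer::getFile(pathFor(M));
    return Buf ? std::move(*Buf) : nullptr;
  }

private:
  std::string pathFor(const llvm::Module *M) {
    return CacheDir + "/" + M->getModuleIdentifier() + ".o";
  }
  std::string CacheDir;
};

The cache is attached via the second constructor argument of SimpleCompiler, e.g. SimpleCompiler(*TM, &MyCache).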

If you are using LLJIT: There is no way to do this yet, but I plan to add support for ObjectCaches. Let me know if you are using LLJIT and I can do this straight away (it is an easy change).

(2) How would I pass an ObjectFile saved in question#1 back to ORC so that it will relocate it and generate function pointers?

You can add an object file directly to the RTDyldObjectLinkingLayer of your JIT class using that class’s add method.

If you are using an ObjectCache with LLJIT (once that is supported), and assuming the ObjectCache implements the getObject method, then the object will be loaded automatically when you attempt to compile the source IR for that object. If you wish to use the ObjectCache only to save the object, and manage the caching outside the ObjectCache, then you can use LLJIT::addObject to add the saved object.
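The manual round trip is then just reading the file back into a MemoryBuffer and handing it to the JIT. A sketch (the exact entry point varies a bit by version; the in-tree LLJIT spells it addObjectFile, so adjust to whatever your LLVM exposes):

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"

// Sketch: reload a previously saved object file and add it to an LLJIT
// instance. For a custom JIT class, hand the buffer to the object layer's
// add method instead.
llvm::Error addCachedObject(llvm::orc::LLJIT &J, llvm::StringRef Path) {
  auto Buf = llvm::MemoryBuffer::getFile(Path);
  if (!Buf)
    return llvm::errorCodeToError(Buf.getError());
  return J.addObjectFile(std::move(*Buf));
}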

(3) How do I get access to the relocated ObjectFile sections?
(4) For the “__llvm_stackmaps” section - will I need to do any relocation to obtain the function pointers?

Looks like you already figured this one out. :slight_smile:

To confirm: You will want to call the base class finalizeMemory method. You can do that before or after running your code: finalizeMemory will not change section contents, but will generally apply memory protections. Since this is a data section it should remain readable either way.
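To make (4) concrete: the function addresses in the finalized section are plain absolute pointers, so they can be read straight out of the documented stack map layout. A sketch, assuming stack map format version 3 and native endianness:

#include <cstdint>
#include <cstring>
#include <vector>

// Read the per-function addresses out of a finalized __llvm_stackmaps
// section. Layout (v3): 4-byte header, then uint32 NumFunctions,
// uint32 NumConstants, uint32 NumRecords, followed by NumFunctions records
// of { uint64 FunctionAddress, uint64 StackSize, uint64 RecordCount }.
std::vector<uint64_t> stackmapFunctionAddresses(const uint8_t *Section) {
  auto read32 = [&](size_t Off) {
    uint32_t V;
    std::memcpy(&V, Section + Off, sizeof(V));
    return V;
  };
  auto read64 = [&](size_t Off) {
    uint64_t V;
    std::memcpy(&V, Section + Off, sizeof(V));
    return V;
  };
  uint32_t NumFunctions = read32(4);
  std::vector<uint64_t> Addrs;
  for (uint32_t I = 0; I < NumFunctions; ++I)
    Addrs.push_back(read64(16 + I * 24)); // FunctionAddress of each StkSizeRecord
  return Addrs;
}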

However, this comment on the RemoteObjectClientLayer class sounds promising for your questions (1) and (2):
/// Sending relocatable objects to the server (rather than fully relocated
/// bits) allows JIT’d code to be cached on the server side and re-used in
/// subsequent JIT sessions.

Actually I want to kill off the RemoteObjectLayer (at least as it is currently implemented). There are better solutions to the Remote JITing problem in the new API. :slight_smile:

– Lang.

Thank you both for the feedback - this is very illuminating.
I am using my own custom JIT class based on what I learned from the Kaleidoscope demo.

I will follow your advice and follow up with questions if I have any.

This is what my JIT class looks like, followed by the constructor:
https://github.com/clasp-developers/clasp/blob/dev/include/clasp/llvmo/llvmoExpose.h#L4189

ClaspJIT_O::ClaspJIT_O()
    : TM(EngineBuilder().selectTarget()),
      DL(TM->createDataLayout()),
      // Object layer: hand out a SectionMemoryManager per object and notify
      // the GDB listener (and our own symbol-info bookkeeping) on load.
      ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); },
                  [this](llvm::orc::RTDyldObjectLinkingLayer::ObjHandleT H,
                         const RTDyldObjectLinkingLayerBase::ObjectPtr &Obj,
                         const RuntimeDyld::LoadedObjectInfo &Info) {
                    this->GDBEventListener->NotifyObjectEmitted(*(Obj->getBinary()), Info);
                    save_symbol_info(*(Obj->getBinary()), Info);
                  }),
      CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
      OptimizeLayer(CompileLayer,
                    [this](std::shared_ptr<Module> M) {
                      return optimizeModule(std::move(M));
                    }),
      GDBEventListener(JITEventListener::createGDBRegistrationListener()),
      ModuleHandles(_Nil<core::T_O>())
{
  llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
}