LLVMContext: Threads and Ownership.

Hi All,

ORC’s new concurrent compilation model generates some interesting lifetime and thread safety questions around LLVMContext: We need multiple LLVMContexts (one per module in the simplest case, but at least one per thread), and the lifetime of each context depends on the execution path of the JIT’d code. We would like to deallocate contexts once all modules associated with them have been compiled, but there is no safe or easy way to check that condition at the moment as LLVMContext does not expose how many modules are associated with it.

One way to fix this would be to add a mutex to LLVMContext and expose both it and the module count. Then in the IR-compiling layer of the JIT we could have something like:

// Compile finished, time to deallocate the module.
// Explicit deletes used for clarity, we would use unique_ptrs in practice.
auto &Ctx = Mod->getContext();
delete Mod;
std::lock_guard<std::mutex> Lock(Ctx.getMutex());
if (Ctx.getNumModules() == 0)
  delete &Ctx;

Another option would be to invert the ownership model and say that each Module shares ownership of its LLVMContext. That way LLVMContexts would be automatically deallocated when the last module using them is destructed (providing no other shared_ptrs to the context are held elsewhere).
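To make the intended lifetime behaviour concrete, here is a minimal stand-alone sketch using toy stand-ins (not the real llvm::LLVMContext / llvm::Module, whose constructors would need to change for this): the context is destroyed exactly when the last module referencing it goes away.

#include <cstdio>
#include <memory>

// Toy stand-ins, NOT the real LLVM classes; this only demonstrates the
// proposed lifetime behaviour: the context dies with its last module.
struct ToyContext {
  ~ToyContext() { std::puts("context destroyed"); }
};

struct ToyModule {
  explicit ToyModule(std::shared_ptr<ToyContext> C) : Ctx(std::move(C)) {}
  std::shared_ptr<ToyContext> Ctx; // each module co-owns its context
};

int main() {
  auto Ctx = std::make_shared<ToyContext>();
  auto M1 = std::make_unique<ToyModule>(Ctx);
  auto M2 = std::make_unique<ToyModule>(Ctx);
  Ctx.reset(); // the creator drops its reference; context still alive
  M1.reset();  // one module gone; context still alive
  M2.reset();  // last module gone; "context destroyed" prints here
}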

There are other possible approaches (e.g. side tables for the mutex and module count) but before I spent too much time on it I wanted to see whether anyone else has encountered these issues or has opinions on solutions.

Cheers,
Lang.

Actually, looking at the destructors for LLVMContext and Module I do not think the current ownership scheme makes sense, so this might be a good opportunity to re-think it.

Right now an LLVMContext owns a list of modules (see LLVMContextImpl::OwnedModules) that it destroys when its destructor is called. Modules remove themselves from this list if they are destructed before the context:

Module::~Module() {
  Context.removeModule(this);
  // ...
}

LLVMContextImpl::~LLVMContextImpl() {
  // NOTE: We need to delete the contents of OwnedModules, but Module's dtor
  // will call LLVMContextImpl::removeModule, thus invalidating iterators into
  // the container. Avoid iterators during this operation:
  while (!OwnedModules.empty())
    delete *OwnedModules.begin();
  // ...
}

This makes it unsafe to hold a unique_ptr to a Module: if any Module is still alive when its context goes out of scope it will be double freed, first by the LLVMContextImpl destructor and then again by the unique_ptr. Idiomatic scoping means that we tend not to see this in practice (Module takes an LLVMContext reference, so we always declare the context first and it goes out of scope last), but it also makes the context's ownership redundant: the modules are always freed first via their unique_ptrs.
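For illustration, the hazardous ordering is easy to write against the current API. Given the OwnedModules behaviour quoted above, the Module below is deleted twice; treat this as a sketch of the contract violation rather than a test case.

#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

void doubleFreeHazard() {
  auto Ctx = std::make_unique<llvm::LLVMContext>();
  auto M = std::make_unique<llvm::Module>("M", *Ctx);
  Ctx.reset(); // ~LLVMContextImpl deletes the Module via OwnedModules...
  M.reset();   // ...and the unique_ptr deletes it again: double free.
}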

I don’t think it makes sense for LLVMContext to own Modules. I think that Modules should share ownership of their LLVMContext via a shared_ptr.

Thoughts?

Cheers,
Lang.

Agreed, the existing ownership seems sub-optimal. I wouldn't say broken, but subtle at least - it looks like you get the choice to either manage the ownership of the Module object yourself, or let the context handle it (e.g. currently it'd be valid to just do "{ LLVMContext C; new Module("a", C); new Module("b", C); }" - the Modules end up owned by the context and cleaned up there).

Might be hard to migrate existing users away from this without silently introducing memory leaks… maybe with some significant API breakage - move Module construction to a factory/helper that returns a std::unique_ptr - requiring every Module construction to be revisited, and users relying on LLVMContext based ownership/cleanup to redesign their code.

As to the original question - gut reaction: this doesn’t seem like something that’s general-purpose enough to be implemented in the LLVMContext/Module itself. I think a reasonable ownership model for LLVMContext/Module is that the user is required to ensure the LLVMContext outlives all Modules created within it (same way a user of std::vector is required to ensure that the vector is not reallocated so long as they’re keeping pointers/references to elements in it). I’d think/suggest ref-counted LLVMContext ownership would be done by wrapping/external tracking in this use case.

- Dave

Hi Lang,

We would like to deallocate contexts

I have not looked at the Orc APIs as of LLVM 6+, but I'm curious where this requirement comes from.
Does Orc take ownership of the modules that are being JIT'ed? I.e., is it the same 'ownership model' as MCJIT has?

I think JIT users should take care of memory allocation/deallocation themselves. That said, I believe you have strong reasons for implementing things this way :)
Could you please tell us more about the topic?

I’d think/suggest ref-counted LLVMContext ownership would be done by wrapping/external tracking in this use case.

What kind of wrapping are you imagining? I’m worried that maintaining an external context ref-count is going to be awkward and error prone, but perhaps there are idioms that would make this easier than I am imagining.

I think it may be simpler and safer to expose the number of Modules managed by an LLVMContext. That way the “ref-count” always reflects the number of modules using the context, and tracking can be set up and managed entirely from the LLVMContext creation site (by adding the context to an ‘automatically managed’ set), rather than intruding on every Module creation point.

We would still need to add a mutex to LLVMContext to make this thread safe though.
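For concreteness, here is a rough sketch of that 'automatically managed' set. ManagedContexts is a hypothetical helper, and getMutex()/getNumModules() are the proposed LLVMContext additions discussed above, not existing API.

#include "llvm/IR/LLVMContext.h"
#include <mutex>
#include <set>

// Hypothetical helper: contexts created through it are deleted automatically
// once their module count drops to zero.
class ManagedContexts {
  std::mutex SetMutex;
  std::set<llvm::LLVMContext *> Contexts;

public:
  llvm::LLVMContext &create() {
    auto *Ctx = new llvm::LLVMContext();
    std::lock_guard<std::mutex> Lock(SetMutex);
    Contexts.insert(Ctx);
    return *Ctx;
  }

  // To be called by the compile layer after it has deleted a Module.
  void release(llvm::LLVMContext &Ctx) {
    {
      std::lock_guard<std::mutex> Lock(Ctx.getMutex()); // proposed API
      if (Ctx.getNumModules() != 0)                     // proposed API
        return;
    }
    std::lock_guard<std::mutex> Lock(SetMutex);
    Contexts.erase(&Ctx);
    delete &Ctx;
  }
};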

– Lang.

In the most basic case, I’d imagine something like this:

auto C = std::make_shared<LLVMContext>();
struct ModuleAndSharedContextDeleter { std::shared_ptr<LLVMContext> C; void operator()(Module *M) { delete M; } /* ctor to init C */ };
std::unique_ptr<Module, ModuleAndSharedContextDeleter> M(new Module("m", *C), ModuleAndSharedContextDeleter(C));

(or invert this and traffic in structs that have a unique_ptr and a shared_ptr as members - though the above example has the advantage that it mostly looks like unique_ptr, and you can change ownership (pass the unique_ptr<Module, CustomDeleter> to a shared_ptr and it’ll correctly propagate the deleter, etc))
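Spelled out, the inverted variant might look something like the sketch below (makeContextAndModule is a hypothetical helper). Member order matters: members are destroyed in reverse declaration order, so the Module is freed before the context it depends on.

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

// Sketch of the "struct with both owners" variant.
struct ContextAndModule {
  std::shared_ptr<llvm::LLVMContext> Ctx; // destroyed second
  std::unique_ptr<llvm::Module> Mod;      // destroyed first
};

ContextAndModule makeContextAndModule(llvm::StringRef Name,
                                      std::shared_ptr<llvm::LLVMContext> Ctx) {
  auto M = std::make_unique<llvm::Module>(Name, *Ctx);
  return {std::move(Ctx), std::move(M)};
}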

But it sounds like you’re concerned with a situation in which there are a wide variety of things making Modules that are maybe outside the purview of Orc but need this ownership model? That would seem unfortunate & I’m not quite sure I’m picturing the situation you have in mind.

auto C = std::make_shared<LLVMContext>();
struct ModuleAndSharedContextDeleter { std::shared_ptr<LLVMContext> C; void operator()(Module *M) { delete M; } /* ctor to init C */ };
std::unique_ptr<Module, ModuleAndSharedContextDeleter> M(new Module("m", *C), ModuleAndSharedContextDeleter(C));

(or invert this and traffic in structs that have a unique_ptr and a shared_ptr as members - though the above example has the advantage that it mostly looks like unique_ptr, and you can change ownership (pass the unique_ptr<Module, CustomDeleter> to a shared_ptr and it’ll correctly propagate the deleter, etc))

I believe this would work, but it certainly has a suboptimal feel to it.

For what it is worth, the C API contains some helpful documentation (see include/llvm-c/LLVMContext.h):

/**
 * Destroy a module instance.
 *
 * This must be called for every created module or memory will be
 * leaked.
 */
void LLVMDisposeModule(LLVMModuleRef M);

So based on the C-API’s documentation we are at least free to remove this strange fallback-ownership model of LLVMContext.

auto C = std::make_shared<LLVMContext>();
struct ModuleAndSharedContextDeleter { std::shared_ptr<LLVMContext> C; void operator()(Module *M) { delete M; } /* ctor to init C */ };
std::unique_ptr<Module, ModuleAndSharedContextDeleter> M(new Module("m", *C), ModuleAndSharedContextDeleter(C));

(or invert this and traffic in structs that have a unique_ptr and a shared_ptr as members - though the above example has the advantage that it mostly looks like unique_ptr, and you can change ownership (pass the unique_ptr<Module, CustomDeleter> to a shared_ptr and it’ll correctly propagate the deleter, etc))

I believe this would work, but it certainly has a suboptimal feel to it.

If you just want to use shared_ptr everywhere (for both the context and the modules), the complexity would leak into fewer APIs, because shared_ptr type erases its deleter.
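Concretely, that might look like the sketch below (makeSharedModule is a hypothetical helper, not an existing API). Because std::shared_ptr type-erases its deleter, callers just see a plain std::shared_ptr<Module>.

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

// Hypothetical helper: the returned shared_ptr's deleter captures a copy of
// the context's shared_ptr, so the context cannot die before the module.
std::shared_ptr<llvm::Module>
makeSharedModule(llvm::StringRef Name, std::shared_ptr<llvm::LLVMContext> Ctx) {
  auto *M = new llvm::Module(Name, *Ctx);
  return std::shared_ptr<llvm::Module>(M, [Ctx](llvm::Module *Mod) {
    delete Mod; // the Module is destroyed first; the captured Ctx copy is
                // released afterwards, possibly destroying the context.
  });
}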

But yeah, feels weird to me to change the ownership of LLVMContext for this one use case - if there were more users that wanted to have it, I could see building it in in a more convenient way (but maybe still in the form of a common typedef for unique_ptr<Module, CustomDeleter>, plus built-in factory functions/helpers/etc. to make that more mainstream).

Also would come back to some discussion (maybe easier in person or over IRC, etc, not sure) about whether there's a way to design things so shared ownership isn't needed even for ORC - I don't have enough info to fully understand where/how it'd be used here, or whether there are good alternatives.

For what it is worth, the C API contains some helpful documentation (see include/llvm-c/LLVMContext.h):

/**
 * Destroy a module instance.
 *
 * This must be called for every created module or memory will be
 * leaked.
 */
void LLVMDisposeModule(LLVMModuleRef M);

So based on the C-API’s documentation we are at least free to remove this strange fallback-ownership model of LLVMContext.

Based on the documentation, yeah - though I’d expect it’d still break some folks - and it’s the C++ API users I’d expect we have more of at this level & likely to introduce some quiet leaks.

Be nice to fail hard (though we don't have much of a way to do that - an assert failure, but that only fires in assert builds, etc.) - we could keep the list of modules around for a while & assert that it's empty when the context is destroyed. Not perfect, but at least it's something.
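That might look something like the fragment below, sketched against the LLVMContextImpl destructor quoted earlier (and, as noted, it only fires in +Asserts builds):

LLVMContextImpl::~LLVMContextImpl() {
  // Sketch: instead of deleting leftover modules, flag them as a bug.
  assert(OwnedModules.empty() &&
         "LLVMContext destroyed while Modules still reference it");
  // ... rest of the existing teardown ...
}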

- Dave

If you just want to use shared_ptr everywhere (for both the context and the modules), the complexity would leak into fewer APIs, because shared_ptr type erases its deleter.

I do not think there’s a motivation for changing Module’s ownership, but owning LLVMContext by a shared_ptr makes sense if you think of LLVMContext as a ‘pool’ data structure that can be referred to by multiple Modules, rather than as a container for modules.

But yeah, feels weird to me to change the ownership of LLVMContext for this one use case

I think of this discussion as being about the “right” ownership model for LLVMContext, with this use case as a motivating example of the problems with the current model. The question to ask is: if we were designing this ownership model over again, would we do it the same way? Of course it is reasonable to answer: “No, we would not, but the cost of switching to a more sensible model now is prohibitive”.

Also would come back to some discussion (maybe easier in person or over IRC, etc, not sure) about whether there’s a way to design things so shared ownership isn’t needed even for ORC.

When it comes to LLVMContext we already have the following constraints: (1) multiple modules must be able to refer to the same LLVMContext, (2) modules must never out-live their context. If you add the desire to deallocate contexts once they're no longer referenced, then any scheme you come up with is going to be shared_ptr by another name/mechanism.

Based on the documentation, yeah - though I’d expect it’d still break some folks - and it’s the C++ API users I’d expect we have more of at this level & likely to introduce some quiet leaks.

On its own I don’t think potential C++ API leaks are much of a barrier: As you said, we could change the construction API to always return unique_ptr if that is the model we wanted to enforce.

Be nice to fail hard (though we don't have much of a way to do that - an assert failure, but that only fires in assert builds, etc.) - we could keep the list of modules around for a while & assert that it's empty when the context is destroyed. Not perfect, but at least it's something.

If we were to switch to shared ownership of contexts then the invariant could never be broken, at the expense of some predictability about the lifetimes: Scenarios that would previously have been a violation of contract would now implicitly extend the LLVMContext lifetime. That said, if you’re still holding a Module and doing something with it, I’m 99% confident that what you really wanted was for its context to remain alive.

Cheers,
Lang.

Hi Lang,

Thank you for the clarification.

Do you have a use-case for manually managing LLVM module lifetimes?

Our[1] initial idea was to feed all the modules to a JIT engine, run some code,
then do further analysis and modification of the modules, feed the new versions
to a JIT engine and run some code again.

This didn't work well with MCJIT because we did not want to give up ownership
of the modules. The initial approach also did not work properly because, as you
mentioned, compilation modifies the modules, which does not suit our needs.

Eventually, we decided to compile things on our own and communicate with the
Orc JIT using only object files. Before compilation we make a clone of a module
(simply by loading it from disk again), using the original module only for analysis.
This way we actually have two copies of a module in memory, with the second copy
disposed of immediately after compilation.
It is also worth mentioning that our use case is AOT compilation, not really lazy JIT.
As for the object files, we also cannot give up ownership because we have
to reuse them many times. At least this is the current state, though I'm working on
another improvement: loading all object files once and then re-executing the program
many times.

Also, we wanted to support several versions of LLVM. Because of drastic
changes in the Orc APIs we decided to hand-craft another JIT engine based on your
work. It is less powerful, but fits our needs very well:

    https://github.com/mull-project/mull/blob/master/include/Toolchain/JITEngine.h
    https://github.com/mull-project/mull/blob/master/lib/Toolchain/JITEngine.cpp

We also parallelize lots of things, but this could also work using Orc given that we
only use the ObjectLinkingLayer.

P.S. Sorry for giving such a chaotic explanation, but I hope it shows our use case :)

[1] https://github.com/mull-project/mull