Mesa (http://www.mesa3d.org/) uses LLVM to compile shaders. These are
typically small bits of code (~10KB) and one application can use many
of them. Mesa creates an ExecutionEngine with a default JIT memory
manager for each shader it compiles, and keeps the engine around as
long as the shader code is needed. This wastes roughly 1 MB per
shader. Half the overhead is in the memory manager, which allocates
512 KB even if only a few KB are used; the other half is in the
engine, which we can't delete because doing so would destroy the
memory manager and the code within it.
A couple of solutions:
1) Get a copy of the code, then delete the engine. This might be
doable with MCJIT, but seemingly not with the old JIT, which Mesa
wants to use.
2) A new memory manager with less overhead, which also doesn't
delete the code when it is destroyed by the engine.
I've got solution 2 working, but LLVM could make it easier. Deriving
from JITMemoryManager seems complicated - there are a lot of methods
to implement. Deriving from DefaultJITMemoryManager is not possible
because it's in an anonymous namespace. So I made a manager that
delegates everything to a shared default memory manager. This
eliminates overhead by packing everything into one shared manager, and
when a delegating manager gets killed the shared manager persists, so
it's safe to delete the engines.
Below is a generic delegating memory manager. Using this, I only
needed to override a couple of methods. Maybe it could be useful to
others. See the thread starting with
Note how many ifdefs were needed to work with all versions from 3.1 to
now. If this were added to the LLVM project it could be maintained in
one place with no ifdefs. If the LLVM maintainers don't want it, I
also have a patch to allow subclassing DefaultJITMemoryManager.
Would that be more agreeable? Thanks.

class DelegatingJITMemoryManager : public llvm::JITMemoryManager {

   protected:
      virtual llvm::JITMemoryManager *mgr() const = 0;

   public:
      /*
       * From JITMemoryManager
       */
      virtual void setMemoryWritable() {
         return mgr()->setMemoryWritable();
      }
      virtual void setMemoryExecutable() {
         return mgr()->setMemoryExecutable();
      }
      virtual void setPoisonMemory(bool poison) {
         return mgr()->setPoisonMemory(poison);
      }
      virtual void AllocateGOT() {
         mgr()->AllocateGOT();
         /*
          * isManagingGOT() is not virtual in base class so we can't delegate.
          * Instead we mirror the value of HasGOT in our instance.
          */
         HasGOT = mgr()->isManagingGOT();
      }
      virtual uint8_t *getGOTBase() const {
         return mgr()->getGOTBase();
      }
      virtual uint8_t *startFunctionBody(const llvm::Function *F,
                                         uintptr_t &ActualSize) {
         return mgr()->startFunctionBody(F, ActualSize);
      }
      virtual uint8_t *allocateStub(const llvm::GlobalValue *F,
                                    unsigned StubSize,
                                    unsigned Alignment) {
         return mgr()->allocateStub(F, StubSize, Alignment);
      }
      virtual void endFunctionBody(const llvm::Function *F,
                                   uint8_t *FunctionStart,
                                   uint8_t *FunctionEnd) {
         return mgr()->endFunctionBody(F, FunctionStart, FunctionEnd);
      }
      virtual uint8_t *allocateSpace(intptr_t Size, unsigned Alignment) {
         return mgr()->allocateSpace(Size, Alignment);
      }
      virtual uint8_t *allocateGlobal(uintptr_t Size, unsigned Alignment) {
         return mgr()->allocateGlobal(Size, Alignment);
      }
      virtual void deallocateFunctionBody(void *Body) {
         return mgr()->deallocateFunctionBody(Body);
      }
#if HAVE_LLVM < 0x0304
      virtual uint8_t *startExceptionTable(const llvm::Function *F,
                                           uintptr_t &ActualSize) {
         return mgr()->startExceptionTable(F, ActualSize);
      }
      virtual void endExceptionTable(const llvm::Function *F,
                                     uint8_t *TableStart,
                                     uint8_t *TableEnd,
                                     uint8_t *FrameRegister) {
         return mgr()->endExceptionTable(F, TableStart, TableEnd,
                                         FrameRegister);
      }
      virtual void deallocateExceptionTable(void *ET) {
         return mgr()->deallocateExceptionTable(ET);
      }
#endif
      virtual bool CheckInvariants(std::string &s) {
         return mgr()->CheckInvariants(s);
      }
      virtual size_t GetDefaultCodeSlabSize() {
         return mgr()->GetDefaultCodeSlabSize();
      }
      virtual size_t GetDefaultDataSlabSize() {
         return mgr()->GetDefaultDataSlabSize();
      }
      virtual size_t GetDefaultStubSlabSize() {
         return mgr()->GetDefaultStubSlabSize();
      }
      virtual unsigned GetNumCodeSlabs() {
         return mgr()->GetNumCodeSlabs();
      }
      virtual unsigned GetNumDataSlabs() {
         return mgr()->GetNumDataSlabs();
      }
      virtual unsigned GetNumStubSlabs() {
         return mgr()->GetNumStubSlabs();
      }

      /*
       * From RTDyldMemoryManager
       */
      virtual uint8_t *allocateCodeSection(uintptr_t Size,
                                           unsigned Alignment,
                                           unsigned SectionID) {
         return mgr()->allocateCodeSection(Size, Alignment, SectionID);
      }
#if HAVE_LLVM >= 0x0303
      virtual uint8_t *allocateDataSection(uintptr_t Size,
                                           unsigned Alignment,
                                           unsigned SectionID,
                                           bool IsReadOnly) {
         return mgr()->allocateDataSection(Size, Alignment, SectionID,
                                           IsReadOnly);
      }
      virtual void registerEHFrames(llvm::StringRef SectionData) {
         return mgr()->registerEHFrames(SectionData);
      }
#else
      virtual uint8_t *allocateDataSection(uintptr_t Size,
                                           unsigned Alignment,
                                           unsigned SectionID) {
         return mgr()->allocateDataSection(Size, Alignment, SectionID);
      }
#endif
      virtual void *getPointerToNamedFunction(const std::string &Name,
                                              bool AbortOnFailure=true) {
         return mgr()->getPointerToNamedFunction(Name, AbortOnFailure);
      }
#if HAVE_LLVM == 0x0303
      virtual bool applyPermissions(std::string *ErrMsg = 0) {
         return mgr()->applyPermissions(ErrMsg);
      }
#elif HAVE_LLVM > 0x0303
      virtual bool finalizeMemory(std::string *ErrMsg = 0) {
         return mgr()->finalizeMemory(ErrMsg);
      }
#endif
};
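To make the shape of the idea concrete outside of LLVM, here is a minimal standalone sketch of the shared-manager pattern. All names here (MemoryManagerBase, SharedManager, DelegatingManager) are invented stand-ins for illustration; the real classes would be llvm::JITMemoryManager and the delegating class above.

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Stand-in for the memory-manager interface (hypothetical; the real
// base class is llvm::JITMemoryManager).
struct MemoryManagerBase {
   virtual ~MemoryManagerBase() {}
   virtual uint8_t *allocateSpace(size_t size) = 0;
};

// One process-wide manager that actually owns all allocations.
struct SharedManager : MemoryManagerBase {
   std::vector<std::unique_ptr<uint8_t[]> > blocks;
   uint8_t *allocateSpace(size_t size) {
      blocks.push_back(std::unique_ptr<uint8_t[]>(new uint8_t[size]));
      return blocks.back().get();
   }
};

// Per-engine manager: forwards every call and owns nothing, so the
// engine can delete it without freeing the code it handed out.
struct DelegatingManager : MemoryManagerBase {
   MemoryManagerBase *shared;
   explicit DelegatingManager(MemoryManagerBase *s) : shared(s) {}
   uint8_t *allocateSpace(size_t size) {
      return shared->allocateSpace(size);
   }
};

// Demo: memory obtained through a delegating manager survives the
// delegating manager's destruction, because the shared manager owns it.
inline bool code_survives_engine_teardown(SharedManager &shared) {
   uint8_t *code;
   {
      DelegatingManager perEngine(&shared); // what each engine would own
      code = perEngine.allocateSpace(16);
      code[0] = 0xC3;                       // pretend this is JITed code
   }                                        // "engine" + manager destroyed
   return code[0] == 0xC3;                  // still alive in SharedManager
}
```

Because every delegating manager packs its allocations into the one shared manager, the per-shader 512 KB slab overhead also disappears.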
Hi Frank,

The project really needs to be looking to move away from the old JIT and to MCJIT. LLVM is actively working to kill the old JIT. It’s already unmaintained. MCJIT is the way forward. Can you elaborate on what’s blocking its adoption for Mesa?


I'll try to find out, or get someone to explain, why Mesa selects
MCJIT with LLVM 3.1 only and JIT for other LLVM versions. I'm not
keen to code a fourth attempt (1: copy JIT code, 2: delegating manager,
3: derive from DefaultJITMemoryManager, 4: copy MCJIT code) but I'll
try copying code with MCJIT. Is that the usual route for people who
want to delete all LLVM engines, etc. while keeping the generated
code?
In any case, my points on the difficulty of creating a
JITMemoryManager apply equally to JIT or MCJIT. Maybe few people care
because most are happy with the default manager? I might be too if I
could change the allocation unit (down from 512KB) and if I could
delete the engine without losing the code. So there's a third
proposal - to sum up:
1) delegating memory manager (code provided in my previous post)
2) de-anonymize default memory manager (I've written this patch too)
3) make default memory manager more flexible

Hi Frank,

The default memory manager for MCJIT (SectionMemoryManager) isn't hidden in the way that the DefaultJITMemoryManager is, so you should be able to derive from it as needed. Also, because MCJIT emits code in sections, it should be fairly straightforward to customize the memory manager to release ownership of the memory before it is deleted.

Take a look and let me know if it has any shortcomings that prevent you from doing what you want to do.
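The "release ownership" approach might look something like the following standalone sketch. OwningManager and ReleasingManager are invented, simplified stand-ins; the real base class would be llvm::SectionMemoryManager, whose internals differ.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Invented stand-in for a section-based manager that owns the
// sections it allocates and frees them when destroyed.
struct OwningManager {
   std::vector<uint8_t *> sections;
   virtual ~OwningManager() {
      for (size_t i = 0; i < sections.size(); ++i)
         delete [] sections[i];
   }
   uint8_t *allocateCodeSection(size_t size) {
      sections.push_back(new uint8_t[size]);
      return sections.back();
   }
};

// Derived manager that can hand ownership of its sections to the
// caller, so deleting the manager (and its engine) keeps the code.
struct ReleasingManager : OwningManager {
   std::vector<uint8_t *> releaseSections() {
      std::vector<uint8_t *> out;
      out.swap(sections);   // base destructor now has nothing to free
      return out;           // caller is responsible for these from now on
   }
};
```

The caller would invoke releaseSections() after compilation finishes, then delete the engine; the generated sections stay mapped until the caller frees them itself.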


From: llvmdev-bounces@cs.uiuc.edu [mailto:llvmdev-bounces@cs.uiuc.edu]
On Behalf Of Frank Henigman
Subject: Re: [LLVMdev] JITMemoryManager

In any case, my points on the difficulty of creating a
JITMemoryManager apply equally to JIT or MCJIT.

Not sure that's true. Extending SectionMemoryManager to use our own allocator proved to be relatively easy for us; it ended up being about 250 lines of code, about a third of which were for logging/debug.

- Chuck

Mesa can use either MCJIT or the old JIT. It prefers the old JIT whenever available because that's what worked better when I looked at it, during LLVM 3.1 development, when MCJIT was bleeding edge. It seemed safer then to keep using JIT (fewer surprises, especially in terms of memory footprint/leaks), and after backporting the AVX encoding support there was no real advantage to MCJIT from an external user's POV.

I'll revisit this next time we upgrade the LLVM version. Currently we are still finishing the migration from LLVM 2.6 to 3.1. The problem with upgrading LLVM is that whenever we do, there is always this deep pain to fix all the memory leaks that arise when using LLVM as a JIT compiler. These are not the ordinary leaks (e.g., lack of free or dangling pointers), but rather the "always growing pools of objects" sort of leaks. For example, although I don't have details handy, one of the leaks I found was a cache of objects (some sort of MC symbols IIRC), where the objects have a unique incremental id. So after a while the cache is full of objects that will never be used again...
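That "always growing pool" failure mode can be shown in a few lines. This is a toy illustration with invented names (SymbolCache, intern), not the actual LLVM code: because each entry is keyed by a fresh incremental id, no future lookup ever hits an old entry, so the "cache" only grows.

```cpp
#include <map>
#include <string>

// Toy model of a cache keyed by a unique incremental id (invented
// names; the real leak was reportedly in some MC symbol pool).
struct SymbolCache {
   std::map<unsigned, std::string> pool;
   unsigned nextId;
   SymbolCache() : nextId(0) {}
   unsigned intern(const std::string &name) {
      unsigned id = nextId++;   // every entry gets a fresh id ...
      pool[id] = name;          // ... so nothing is ever reused or evicted
      return id;
   }
};
```

In a long-lived process that recompiles shaders continually, such a pool grows without bound even though every individual allocation is eventually reachable and "not leaked" in the valgrind sense.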