How to free memory of JIT'd function


I’m testing how to free memory of a JIT’d function.
I thought ExecutionEngine::freeMachineCodeForFunction() and Function::eraseFromParent()
would work and did a test with the following sample code.
But I found that the process's memory usage grows steadily as the while loop runs.
Could someone shed light on this please?

Here is the code.

#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Function.h"
#include "llvm/Assembly/Parser.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;

int main(int argc, char **argv) {
  InitializeNativeTarget();

  LLVMContext Context;
  Module *M = new Module("test", Context);
  ExecutionEngine *EE =
      EngineBuilder(M).setEngineKind(EngineKind::JIT).create();

  while (true) {
    SMDiagnostic error;
    ParseAssemblyString("define i32 @factorial(i32 %X) nounwind uwtable {\n"
                        "  %1 = alloca i32, align 4\n"
                        "  %2 = alloca i32, align 4\n"
                        "  store i32 %X, i32* %2, align 4\n"
                        "  %3 = load i32* %2, align 4\n"
                        "  %4 = icmp eq i32 %3, 0\n"
                        "  br i1 %4, label %5, label %6\n"
                        "; <label>:5    ; preds = %0\n"
                        "  store i32 1, i32* %1\n"
                        "  br label %12\n"
                        "; <label>:6    ; preds = %0\n"
                        "  %7 = load i32* %2, align 4\n"
                        "  %8 = load i32* %2, align 4\n"
                        "  %9 = sub nsw i32 %8, 1\n"
                        "  %10 = call i32 @factorial(i32 %9)\n"
                        "  %11 = mul nsw i32 %7, %10\n"
                        "  store i32 %11, i32* %1\n"
                        "  br label %12\n"
                        "; <label>:12   ; preds = %6, %5\n"
                        "  %13 = load i32* %1\n"
                        "  ret i32 %13\n"
                        "}\n",
                        M, error, Context);

    Function *func = M->getFunction("factorial");
    uintptr_t tmp = (uintptr_t)(EE->getPointerToFunction(func));
    (void)tmp;

    // Free the machine code, then remove the IR from the module.
    EE->freeMachineCodeForFunction(func);
    func->eraseFromParent();
  }

  delete EE; // never reached; the infinite loop is only for leak testing
  return 0;
}

Thank you for any help.



I put the sample code and a brief analysis using Valgrind on GitHub
to make my problem clear.
The Valgrind heap profiler indicates a memory leak, but I don't see
what is wrong with the way I free the memory.

If someone could please offer some advice/suggestion on this, I would
really appreciate it.


There may be another explanation, but I've seen this sort of issue before: LLVM uses several object pools (associated with the LLVM context and the JIT engine), and objects from these pools are often leaked, or the pools grow without bound due to implementation bugs.

These are not ordinary memory leaks, since destroying the LLVM context and/or JIT engine will successfully reclaim all the memory. The problem is only visible in long-running JIT processes that compile repeatedly.

I've submitted several fixes for this in the past (against 2.6) [1], but this is very easy to regress, given that valgrind memcheck will not detect it. The only way to detect it is either to repeat compilation continuously and track deltas, as you've done, or to tweak the source to use ordinary allocators instead of pool allocators (but this can't be done blindly, as in several cases LLVM code relies on the pool to manage object lifetimes).


[1] I'd suggest you try your sample against

Thank you very much Jose,

I will shorten the lifetimes of the LLVM context and JIT engine in my DSL
project to work around the memory leak issue, as I'd like to stay on
version 3.0.