ExecutionEngine always comes back NULL

I wrote a little OS X app to assemble some LLVM (human-readable) code and run it. Unfortunately, my ExecutionEngine won't get created; EngineBuilder::create() just comes back NULL.

This is the code that builds it:


This is the code it seems to successfully assemble, but it can't build the ExecutionEngine. You can see I tried several different ways of building it.
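[The original snippets weren't preserved in this archive. A minimal sketch of this kind of setup, assuming the LLVM ~3.0-era C++ API (header paths vary between versions), might look like:]

```cpp
// Hypothetical sketch, not the original poster's code: assemble
// human-readable IR and try to build an ExecutionEngine.
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Assembly/Parser.h"          // ParseAssemblyString
#include "llvm/Support/SourceMgr.h"        // SMDiagnostic
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <iostream>

using namespace llvm;

int main() {
    LLVMContext ctx;
    SMDiagnostic diag;

    // Assemble IR text into a Module.
    Module *mod = ParseAssemblyString(
        "define i32 @main() {\n  ret i32 42\n}\n", 0, diag, ctx);
    if (!mod) { std::cerr << "parse failed\n"; return 1; }
    mod->dump();  // the module itself looks fine...

    // ...but without the JIT linked in and the target initialized,
    // this comes back NULL.
    ExecutionEngine *ee = EngineBuilder(mod).create();
    std::cout << (ee ? "engine created\n" : "engine is NULL\n");
    return 0;
}
```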


The module seems to get created properly (you can see the source and the result of mod->dump()).

Is there a dylib that I need to include that has some init code that's otherwise not invoked? How can I tell why my ExecutionEngine didn't create? Is there an error code somewhere?

I based my code off the HowToUseJIT.cpp example, llvm-as.cpp, and lli.cpp. I must've overlooked something, but I'm not sure what.

Any ideas? Thanks!

Hi Rick,
I had the same problem last week. I figured out that I hadn't initialized the target.

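[The code wasn't preserved here; the target initialization being described is presumably along these lines. Note that TargetSelect.h lived under llvm/Target/ before LLVM 3.0 and under llvm/Support/ afterward.]

```cpp
#include "llvm/Support/TargetSelect.h"  // "llvm/Target/TargetSelect.h" on LLVM < 3.0

int main() {
    // Registers the native target so EngineBuilder can find a backend.
    // Call this once before EngineBuilder::create(); without it,
    // create() can return NULL.
    llvm::InitializeNativeTarget();
    // ... build the Module and call EngineBuilder(module).create() here ...
    return 0;
}
```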

Sorry I forgot to add code that I use to run code:

/* Executes the AST by running the main function */
GenericValue CodeGenContext::runCode() {
    std::cout << "Running code...\n";
    ExecutionEngine *ee = EngineBuilder(module).create();
    std::vector<GenericValue> noargs;
    GenericValue v = ee->runFunction(mainFunction, noargs);
    std::cout << "Code was run.\n";
    return v;
}
Isn't that more-or-less exactly what I have? I don't see anything about the target there.

Hi Rick,

You need to include 'llvm/ExecutionEngine/JIT.h' (or 'llvm/ExecutionEngine/Interpreter.h' if you want that engine) from your main file. Including that file forces the JIT static constructor to be linked into your executable. Without it, the JIT static constructor gets optimized out and you get the result you're seeing.
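[Concretely, the includes in the file that calls EngineBuilder would look something like this; a sketch, with exact header paths depending on the LLVM version:]

```cpp
// Pulling in this header is the whole fix: it forces the JIT's static
// constructor to be linked into the executable. There's an analogous
// header, llvm/ExecutionEngine/Interpreter.h, for the interpreter.
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/Support/TargetSelect.h"

int main() {
    // Still needed in addition to the include above.
    llvm::InitializeNativeTarget();
    // ... build the Module, then EngineBuilder(module).create() ...
    return 0;
}
```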


Wow, how obscure! Thank you; I never would've figured that out.

Why is it done that way? That seems…quite horrible, actually. Why not just instantiate the JIT on demand as part of instantiating the Engine?

Thanks again. It works now. Thank you. Thank you.

Hi Rick,
you are right!
But you can call the EngineBuilder::setErrorStr method to get the creation error.
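[That is, something like this sketch; setErrorStr fills in a std::string when create() fails:]

```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <iostream>
#include <string>

using namespace llvm;

ExecutionEngine *createEngine(Module *mod) {
    std::string errStr;
    ExecutionEngine *ee = EngineBuilder(mod)
        .setErrorStr(&errStr)            // filled in on failure
        .setEngineKind(EngineKind::JIT)  // or EngineKind::Interpreter
        .create();
    if (!ee)
        std::cerr << "EngineBuilder failed: " << errStr << "\n";
    return ee;
}
```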


Rick Mann <rmann@latencyzero.com> writes:

I agree that it is quite horrible. I've been bitten by the same problem you had and was equally mystified. I'm not entirely sure why it was done that way. It's been like that for as long as I've been working with LLVM. I'd guess that it had something to do with being able to build and link without the JIT enabled. I know there's a specific LLVM coding standard against this sort of construct. We just haven't gotten around to cleaning this up yet.


Oh! Hmm. I immediately dismissed this as a possibility because of the name. It's very conventional for there to be set/get methods, and when I didn't find a getErrorStr() method, I figured that whole mechanism was useless.

Error strings really go against what I thought was one of LLVM's design principles: producing machine-usable diagnostics.

Sure enough, though, that produces the error string "Interpreter has not been linked in.", which would've sent me back to the list or IRC channel to find out what the hell that meant, after an hour of linking in more libraries trying to find the missing one. And I probably would've assumed it was a dylib, since the linker didn't catch the error and there are hardly any dylibs around, which probably would've led to me asking the wrong question and going down a lot of rabbit holes before arriving at the right answer.

I can think of at least a couple of ways to make it explicit: either instantiate a subclass of ExecutionEngine that uses the JIT, or instantiate a JIT explicitly and pass it to the ExecutionEngine. Or calling ExecutionEngine::createJIT() should cause a link-time error when the JIT isn't linked in (but not if it's never called).