In MCJIT, the old JIT functions are deprecated in favor of getFunctionAddress, so

    llvm::Function *F = M->getFunction(FuncName);
    void *FN = EE->getPointerToFunction(F);

should be rewritten as

    uint64_t FN = EE->getFunctionAddress(FuncName);
While functionally identical, the new version can be much slower when the correct module is already known: the lookup is linear in the number of added (but not yet loaded) modules, because it may have to search for the correct module, whereas the old code went directly to the known module.
To solve the issue, getFunctionAddress could take an optional Module “hint” (NULL by default) which, if provided, would make getSymbolAddress skip the call to findModuleForSymbol (a very slow operation) and use the provided Module directly.
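To make the proposal concrete, here is a minimal sketch of the hinted lookup. ToyMCJIT, Module, and the member names are simplified stand-ins for the real MCJIT/llvm::Module types, not the actual LLVM API; std::unordered_map stands in for the per-module symbol table.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Toy stand-in for llvm::Module: just a name -> address symbol table.
struct Module {
  std::unordered_map<std::string, uint64_t> Symbols;
};

// Toy stand-in for MCJIT, illustrating the proposed optional hint.
struct ToyMCJIT {
  std::vector<Module *> AddedModules;

  // The slow path: linear scan over all added-but-not-loaded modules.
  Module *findModuleForSymbol(const std::string &Name) {
    for (Module *M : AddedModules)
      if (M->Symbols.count(Name))
        return M;
    return nullptr;
  }

  // Proposed signature: if the caller knows the owning Module, the
  // hint lets us skip findModuleForSymbol entirely.
  uint64_t getFunctionAddress(const std::string &Name,
                              Module *Hint = nullptr) {
    Module *M = Hint ? Hint : findModuleForSymbol(Name);
    if (!M)
      return 0;
    auto It = M->Symbols.find(Name);
    return It == M->Symbols.end() ? 0 : It->second;
  }
};
```

With the hint, `EE.getFunctionAddress("foo", &M)` is a single map lookup regardless of how many modules have been added; without it, the cost stays linear in the module count.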
I should have read this before sending my previous reply.
I’m not a big fan of default parameters, but some form of what you are suggesting may be useful. See my other comments on this topic in the other reply.
The search is linear? If that’s really true, we should fix that.
There’s probably a lot that we could do, but I can’t think of anything easy.
Basically every time we need to look up a symbol by name we’re going to each module and saying “Do you have this symbol?” It would likely be much better if we grabbed the function names from the module and did the search ourselves so that we could keep some information about the things that didn’t match and optimize the next search.
I don’t follow. Why are we looking at the module at all? That query should work even (especially) after the Module is deleted. We should be able to have a local symbol table that’s a DenseMap or something similar to resolve from names to target addresses. That map would be updated as part of the compilation when the object’s symbol table gets read.
Clearly, searching by name should not be linear in the number of modules; it should go through a map of some kind.
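The table described above (names resolved to target addresses, updated as each object's symbol table is read) could look like this sketch. SymbolTable and its members are hypothetical names, and std::unordered_map stands in for llvm::DenseMap/StringMap:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Local name -> target-address table, independent of any Module.
// It keeps working even after the originating Module is deleted.
class SymbolTable {
  std::unordered_map<std::string, uint64_t> NameToAddr;

public:
  // Called during compilation, as the object file's symbol table is read.
  void recordSymbol(const std::string &Name, uint64_t Addr) {
    NameToAddr[Name] = Addr;
  }

  // O(1) average lookup, regardless of how many modules were loaded.
  uint64_t lookup(const std::string &Name) const {
    auto It = NameToAddr.find(Name);
    return It == NameToAddr.end() ? 0 : It->second;
  }
};
```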
After compilation to IR, each module has a StringMap symbol table.
After compiling to MC and loading the object file, the dynamic linker has a StringMap symbol table for all loaded modules.
In the usual use case you load module(s) into MCJIT, then compile/link them, and all is well - no linear search.
The linear-search use case happens when MCJIT sort-of lazily compiles: it holds a list of many, possibly thousands of function-modules (one function per module) that it has not yet bothered to compile to MC and load, since they have not been used. It will do so only when needed. In this case, if asked to find a function, it will (currently) search for it linearly, module by module.
The solution would be to construct a StringMap for the latter use case; in the former it is wasted effort, since all modules are going to be loaded anyway and the dynamic linker will build its own map.
MCJIT doesn’t know which use case it is in, so one solution would be to build the map lazily on the first search, on the assumption that more searches will follow. That assumption may be wrong, so perhaps the better solution is a flag or optional function that lets the programmer, who does know the use case, decide when to build the map.
One possible optimization would be for MCJIT not to build the map from scratch but to merge the modules’ StringMaps into one large virtual StringMap - I don’t know whether that is possible with this data structure, or whether it is cheaper than reconstructing. The data structure would also have to support erasing modules as they are compiled.
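The lazy-build scheme above can be sketched as follows. PendingIndex, PendingModule, and all member names are hypothetical; std::unordered_map stands in for StringMap, and the index maps a function name to the not-yet-compiled module that defines it:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for an added-but-not-yet-compiled module: just its IR-level
// list of function names.
struct PendingModule {
  std::vector<std::string> FunctionNames;
};

// Name -> owning-module index over pending modules, built lazily on the
// first search (assuming more searches will follow).
class PendingIndex {
  std::vector<PendingModule *> Pending;
  std::unordered_map<std::string, PendingModule *> Index;
  bool Built = false;

  void build() {
    for (PendingModule *M : Pending)
      for (const std::string &N : M->FunctionNames)
        Index.emplace(N, M);
    Built = true;
  }

public:
  void addModule(PendingModule *M) {
    Pending.push_back(M);
    if (Built) // keep the index current once it exists
      for (const std::string &N : M->FunctionNames)
        Index.emplace(N, M);
  }

  // First query pays the build cost; later queries are O(1) average.
  PendingModule *findModuleFor(const std::string &Name) {
    if (!Built)
      build();
    auto It = Index.find(Name);
    return It == Index.end() ? nullptr : It->second;
  }

  // Once a module is compiled and loaded, erase its entries; the dynamic
  // linker's own symbol table takes over for loaded code.
  void moduleCompiled(PendingModule *M) {
    if (!Built)
      return;
    for (const std::string &N : M->FunctionNames) {
      auto It = Index.find(N);
      if (It != Index.end() && It->second == M)
        Index.erase(It);
    }
  }
};
```

If the programmer-controlled variant is preferred instead, `build()` could simply be exposed as the optional "build this map now" function mentioned above.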
Ah, gotcha. I was thinking of the use-case of searching after compilation. You’re right that the not-yet-compiled bits are a different sort of beast and could use some optimizing. Thanks for clarifying!