I’m new to LLVM. Is there a way to call C++ code from LLVM IR? For example, if a C++ function is defined outside the main function, how can we call it via LLVM IR?
int foo(int a, int b)
{
return a + b;
}
int main()
{
…
std::string err;
EngineBuilder EB(std::unique_ptr<Module>(module));
EB.setEngineKind(EngineKind::JIT).setErrorStr(&err);
ExecutionEngine* EE = EB.create();
// How can we call foo() via LLVM IR?
…
}
And if that foo function is an API defined in a DLL, how would we do it? I’d appreciate it if an example could be provided.
If you want to make foo available to the JIT through the ExecutionEngine, there is more plumbing to do, but that’s a JIT question. We have a wrapper for this in MLIR, and if you look at the implementation you can figure out the JIT APIs for this: llvm-project/Invoke.cpp at main · llvm/llvm-project · GitHub
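For the plain ExecutionEngine route from the question, the plumbing is mostly about making the host function’s address known to the JIT’s symbol resolution. Below is a minimal sketch of one way to do it (not from the thread): it assumes an MCJIT-backed EngineBuilder as in the question, and the helper name createEngine is made up for the example; llvm::sys::DynamicLibrary is one of several registration mechanisms.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/TargetSelect.h"

// Host-side function we want the JITed code to be able to call.
extern "C" int foo(int a, int b) { return a + b; }

llvm::ExecutionEngine *createEngine(std::unique_ptr<llvm::Module> module) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  std::string err;
  llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(module))
                                  .setEngineKind(llvm::EngineKind::JIT)
                                  .setErrorStr(&err)
                                  .create();

  // Option A: let the JIT search every symbol exported by this process.
  llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
  // Option B: register the address explicitly under the name the IR uses;
  // this also works when the symbol is not exported from the executable.
  llvm::sys::DynamicLibrary::AddSymbol("foo", reinterpret_cast<void *>(&foo));
  return EE;
}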
I think it’s the second case. I want to be able to invoke a function at any time and any place in LLVM, and that function is defined in a DLL. So it’s a JIT question, yes? Actually, I’m not quite clear on the concepts of JIT and LLVM IR.
I reviewed the example you mentioned, and I have three questions about it:
1. What do I need to do to set up the MLIR environment? So far I have just downloaded LLVM 13.0.0 and compiled it.
2. How is the function memrefMultiply invoked? It’s registered into a map with “_mlir_ciface_callback”, but invoked through “caller_for_callback”. How does that work?
3. I noticed there’s a snippet about moduleStr around lines 240~247. What is it used for?
The JIT means that you build LLVM IR, turn it into machine code, and load and execute that code in the current process. The non-JIT case is more like clang: you generate machine code, likely link it, and then execute it as a separate process.
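To make the in-process JIT flow above concrete, here is a hedged sketch of my own (not from the thread): it builds a module that declares foo, emits a small caller function for it, and executes the compiled code in the current process. The helper name runCallerThroughJit is made up, and the example relies on the symbol plumbing from the earlier sketch (DynamicLibrary::AddSymbol) so the declaration of foo can be resolved.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/TargetSelect.h"

extern "C" int foo(int a, int b) { return a + b; }

int runCallerThroughJit() {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  auto ctx = std::make_unique<llvm::LLVMContext>();
  auto module = std::make_unique<llvm::Module>("demo", *ctx);
  llvm::IRBuilder<> builder(*ctx);
  llvm::Type *i32 = builder.getInt32Ty();

  // declare i32 @foo(i32, i32) -- the body lives in the host process.
  llvm::FunctionCallee fooDecl = module->getOrInsertFunction("foo", i32, i32, i32);

  // define i32 @caller() { return foo(2, 3); }
  llvm::Function *caller = llvm::Function::Create(
      llvm::FunctionType::get(i32, /*isVarArg=*/false),
      llvm::Function::ExternalLinkage, "caller", module.get());
  builder.SetInsertPoint(llvm::BasicBlock::Create(*ctx, "entry", caller));
  builder.CreateRet(
      builder.CreateCall(fooDecl, {builder.getInt32(2), builder.getInt32(3)}));

  std::string err;
  llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(module))
                                  .setEngineKind(llvm::EngineKind::JIT)
                                  .setErrorStr(&err)
                                  .create();
  // Make the host definition of foo visible to the JIT's resolver.
  llvm::sys::DynamicLibrary::AddSymbol("foo", reinterpret_cast<void *>(&foo));

  // Machine code is generated and executed inside this very process.
  auto *fn = reinterpret_cast<int (*)()>(EE->getFunctionAddress("caller"));
  int result = fn(); // 5
  delete EE;
  return result;
}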
To execute this unit test, see: Getting Started - MLIR
But note that I linked to this MLIR example here only as one project that uses the underlying LLVM JIT.
The moduleStr is the IR: it isn’t directly LLVM IR, but rather a piece of MLIR that will be turned into LLVM IR in two steps: lowerToLLVMDialect(*module) at line 254, and then implicitly in ExecutionEngine::create(*module).
The interesting part is that call @callback(%arg0, %coefficient) : (memref<?x?xf32>, i32) -> () is a call to a function named callback in MLIR. When converting to LLVM IR, the MLIR machinery prefixes it with _mlir_ciface_. So the LLVM IR will call a function named _mlir_ciface_callback.
To visualize the LLVM IR, I quickly modified the code:
diff --git a/mlir/lib/ExecutionEngine/ExecutionEngine.cpp b/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
index 00569e1d4242..4c31451ea4f5 100644
--- a/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
+++ b/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
@@ -317,7 +317,7 @@ Expected<std::unique_ptr<ExecutionEngine>> ExecutionEngine::create(
.setCompileFunctionCreator(compileFunctionCreator)
.setObjectLinkingLayerCreator(objectLinkingLayerCreator)
.create());
-
+ llvmModule->dump();
// Add a ThreadSafemodule to the engine and return.
ThreadSafeModule tsm(std::move(llvmModule), std::move(ctx));
if (transformer)
I used ... to strip the long section of IR. The interesting part is likely that declare void @_mlir_ciface_callback({ float*, float*, i64, [2 x i64], [2 x i64] }* %0, i32 %1) is the only function that does not have a definition, even though it is called: call void @_mlir_ciface_callback({ float*, float*, i64, [2 x i64], [2 x i64] }* %16, i32 %7), !dbg !7
To be able to link the binary resulting from compiling this with LLVM, we’ll need to provide a definition for _mlir_ciface_callback.
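For illustration, a host-side definition matching that signature could look roughly like the following. This is my sketch, not the code from the unit test: it assumes MLIR’s StridedMemRefType descriptor from mlir/ExecutionEngine/CRunnerUtils.h, whose layout corresponds to the { float*, float*, i64, [2 x i64], [2 x i64] } struct in the IR above, and the scaling loop is just an example body.

#include "mlir/ExecutionEngine/CRunnerUtils.h"
#include <cstdint>

// Hedged sketch of a definition the linker (or JIT) could resolve
// _mlir_ciface_callback against: scale every element of the 2-D memref.
extern "C" void _mlir_ciface_callback(StridedMemRefType<float, 2> *memref,
                                      int32_t coefficient) {
  float *base = memref->data + memref->offset;
  for (int64_t i = 0; i < memref->sizes[0]; ++i)
    for (int64_t j = 0; j < memref->sizes[1]; ++j)
      base[i * memref->strides[0] + j * memref->strides[1]] *= coefficient;
}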
This is where the unit test is interesting, because we implemented this function with a local/static function named memrefMultiply. However, in the link I posted above, we tell the JIT to register this function with its internal dynamic linker so that it is known to provide the definition for _mlir_ciface_callback. That way, when the LLVM IR above is compiled and then loaded in the JIT, the reference to _mlir_ciface_callback resolves to memrefMultiply.
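If you drive the ORC JIT yourself rather than going through the MLIR wrapper, the same registration idea can be expressed with absoluteSymbols. A minimal sketch, assuming LLVM 13-era LLJIT APIs; apart from memrefMultiply and _mlir_ciface_callback, which are mentioned in the thread, the names below are mine:

#include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/ExecutionEngine/Orc/Core.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/Error.h"

// Host function that should serve as the definition of _mlir_ciface_callback.
extern "C" void memrefMultiply(void *memrefDescriptor, int32_t coefficient) {
  // ... actual multiplication elided ...
}

llvm::Error registerCallback(llvm::orc::LLJIT &jit) {
  llvm::orc::MangleAndInterner mangle(jit.getExecutionSession(),
                                      jit.getDataLayout());
  // Tell the JIT's dynamic linker that _mlir_ciface_callback lives at the
  // address of memrefMultiply in the current process.
  return jit.getMainJITDylib().define(llvm::orc::absoluteSymbols(
      {{mangle("_mlir_ciface_callback"),
        llvm::JITEvaluatedSymbol::fromPointer(&memrefMultiply)}}));
}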
While investigating this case, I reviewed the git history and noticed a change to llvm/examples/Kaleidoscope/Chapter4/toy.cpp that adds a DLLEXPORT prefix to the functions putchard() and printd(). You made that change, am I right?
Actually, the example in that file is exactly what I want: invoking putchard() via the JIT. Now I have another question related to that change. Is there a way to make the function available without putting DLLEXPORT in front of it? I tried that and got the message ‘JIT session error: Symbols not found: [ putchard ]’. Do you have any idea about that?