I’m interested in developing multi-threaded applications using LLVM. I would like to ask: is it possible in LLVM to create several threads that execute code concurrently while sharing the same memory space?
If so, at which level can this be done (e.g. context, execution engine, …)?
Creating threads is the responsibility of the operating system or runtime system. It will provide functions that can be called from assembly language or from a language such as C.
You can of course write such C programs (or other languages) and then compile them using Clang & LLVM, but LLVM itself does not know what, for example, pthread_create() means. It’s just a library function.
thank you for your answer.
What I want to know is the following: imagine that I create threads in some way. I want the threads to execute the same bitcode while using the same memory space. I know that LLVM has a nice structure: Context - Module - Execution engine. Is there a way to run several instances of LLVM (using threads, for example) with the same memory space?
LLVM supports concurrent execution of its API at the LLVMContext level.
See [http://llvm.org/docs/ProgrammersManual.html#threads-and-llvm](http://llvm.org/docs/ProgrammersManual.html#threads-and-llvm)
I think your question is the result of a misunderstanding of the nature of LLVM. It is not a virtual machine; it is an abstract machine and a set of tools implementing compilers for that abstract machine. You don't run the bitcode (well, you can interpret it, but you usually don't); you compile the bitcode and then run the resulting code. If you want to run the same code on all threads, then you just use the LLVM [MC]JIT to compile it once and then call the generated functions from different threads.
Do you want to write a program to load LLVM libraries, spawn a few
threads and use the LLVM libraries inside each thread? And, if you do,
are you trying to understand where the cut is, regarding shared memory
and thread-local memory, WRT the LLVM API?
IIUC, you should have a separate Module per thread, to be able to
compile/run on each thread. AFAIK, the ExecutionEngine has no concept
of multiple threads, so you would have to use one EE per thread, which
is wasteful. But that was my understanding a few years ago, and it
might have changed.
They may have answered your question already.
thank you for your answers. I’ll explain my problem by giving you concrete details.
I have a MAIN program which includes the LLVM libraries and loads one bc file. From MAIN I execute different functions from the bc file using the LLVM interpreter. On the LLVM website it is said:
“LLVMContext is an opaque class in the LLVM API which clients can use to operate multiple, isolated instances of LLVM concurrently within the same address space.” - It seems that this is not correct or I’m not understanding it properly.
One context is used to create two modules, and from the modules everything else needed for execution (engine builders, execution engines, functions). When the functions are executed (sequentially), they clearly run in different memory spaces. I have verified this via access to a global variable.
However, if the functions are executed using the same execution engine (same context, module, and engine builder), then they run in the same memory space. But using one execution engine is not suitable for threads, because one execution engine means one stack.
One option is to use different stack frames. Before trying that, I just wanted to ask whether there is a different way of sharing a memory space between multiple execution engines?
It would help if you explained your reason for wanting to use the interpreter. If you want threads, it is typically because you want performance, so it would make more sense to compile the bitcode once with the JIT in the main function and then call the various functions from your threads.
I’m performing a certain analysis with LLVM, and the interpreter is more suitable for it.
Do you perhaps know whether there is a way to do what I described above using the interpreter?