As you may be aware if you've been watching llvm-commits, I've been working recently on improving LLVM's ability to be used across multiple threads. While the goal for now is to be able to hack on multiple Modules in parallel, this has necessitated a larger review of how LLVM interacts with threads.
In a recent(-ish) patch, I added a new API: llvm_start_multithreaded(). This API is used to initialize the thread-safety constructs that are going into LLVM. I will be committing a detailed update to the LLVM programmer's manual explaining how to use it.
The major change is: the JIT will no longer be safe for hosting threaded applications without first invoking llvm_start_multithreaded(). Please begin updating your client applications now if this affects you, as I will be throwing the switch in SVN sooner rather than later.
This question is a bit of far-off future thought:
There's traditionally been a fundamental assumption that static
compilers are single-threaded. Many build systems assume this and
schedule compilation jobs accordingly, one job per processor.
If the compiler becomes multi-threaded internally, how should the
build system best schedule compilation jobs?
That's somewhat orthogonal to what is being worked on right now.
This is not an effort to make LLVM internally multithreaded, but to allow clients to make multiple, concurrent calls into the LLVM APIs from different threads, which obviously necessitates guarding our internal state with mutexes. Threading LLVM itself is another, even larger, undertaking.
After some further reflection on this point, I don't think it will be necessary to make llvm_start_multithreaded() strictly required for all applications that want to host threaded code in the JIT. It is, however, strongly recommended, especially if you're using side exits, to guard against the possibility of accidental concurrent calls into the LLVM API.
I still welcome any comments, particularly from VM implementers.