As previously mentioned here, I think the best design would be to JIT
code fast (using the FOO type) and then allow the user to build to
some other format later if they want. Reloading pre-JITed functions
is a feature I'd like to see, because sometimes you have to quickly
JIT an inefficient function just to get it working and optimize it
later. Being able to save the functions for later use would be a
major plus.
And I know I don't engage in lots of talks here (usually I'm just a
reader), but I'm trying to build a game based on JIT compilation for
everything, including add-ons, patches and user scripts. So I just
follow the JIT part of LLVM, but if there is anything I can help
with, I'd be glad to.
in my own VM effort (not LLVM based) I have (for a very long time) typically worked by producing object files in memory, and then "linking" them however is needed.
yeah, even for JIT, I usually actually produce both textual ASM, convert this into object files (via an "assembler" library), and link these (via a "linker" library, which shares the same DLL/SO as the assembler for historical reasons).
some people have complained to me that all this would be too slow, but in practice I have had nowhere near the levels of extreme code-spewing where this would actually affect much (and, meanwhile, textual ASM is much nicer to work with IMO).
with some tweaks, it is possible to process in excess of 15MB of textual ASM per second, which seems plenty good enough (though with default settings it is a little slower, around 2MB/s, due to supporting ASM macros and using multiple passes to compact jumps and similar).
currently all this is x86 and x86-64 only...
my assembler also uses a variant of NASM's syntax. basic syntax is about the same, but the preprocessor is different and many minor differences exist (including some extensions); still, it is possible to write code which works with both (with some care).
GC'ed JIT is also supported (where the linker links the objects into GC'ed executable memory). this is mostly used for one-off executable objects (typically implementing closures and special purpose thunks, which are usually used as C function pointers).
I am aware of the SELinux issue, but haven't fully added support for it yet (lower priority, as I mostly develop on/for Windows...). mostly it would be done via a software write barrier which redirects writes to the alternate memory mapping (or similar).
single-mapping would still be used on systems supporting read/write/execute memory.
typically, I am using COFF internally, even on Linux and similar.
caching object files to disk is done by several of my frontends, because yes, it is sort of pointless to endlessly recompile the same code every time the app starts or similar (especially since my C compiler is slow...).