Hi folks!

I am currently involved in a project that uses the LLVM JIT, and I have a couple of questions for you:

  1. According to http://llvm.org/Features.html the LLVM JIT supports only X86 and PowerPC, but the LLVM code generator supports ARM too. Are there any plans to add JIT support for the ARM backend as well?

  2. Could the LLVM JIT collect hot traces from the overall code and, after applying aggressive optimizations to them, execute them separately from the cold code at higher performance?

That web page is dated. The old JIT used its own encoders, separate from the standard assembly pipeline. Today we use MCJIT, which builds object files the normal way, in memory, and then relocates and runs them.

I updated the web page in r233815.

[+Lang, wielder of the +1 cattle prod of JIT taming]

  1. Not sure - I haven’t heard of any. The usual “patches welcome” applies, but I’m not sure what is missing on ARM, so I couldn’t really tell you where to start (testing it is probably the first step in any case).

  2. Possible, but for now that’s been considered outside the purview of the JIT APIs themselves (even the new/fancy Orc JIT) - in part because LLVM’s JIT is a bit too heavyweight to make a good first-line JIT for many use cases. People usually build that sort of infrastructure into layers that don’t even reach LLVM’s JIT (e.g. a first-pass splatting-style ‘JIT’ that is fast to run but produces abysmal code, without the overhead of building LLVM IR). I don’t doubt that layers for handling this sort of thing will eventually be built into the composable Orc JIT.

Hi Marat,

As Reid mentioned, LLVM’s newer JIT APIs (MCJIT and Orc) operate on object files under the hood. That means that ARM support will vary from format to format, and consequently from platform to platform. I know our support for ARM in MachO is reasonable, as I did some work to improve it a few months back, but I’m not sure what the state of ELF support is, and I don’t think there’s any COFF support yet.

I would recommend the approach that Dave suggested: try it and see. Please file bugs where you run into trouble - missing relocation support (the most common problem) is usually quite easy to fix.

Regarding trace optimization, as Dave said there is no support currently. You will have to write your own IR transformations to optimize traces (if you feel they’re generically useful, you could contribute them back to trunk). If you’re going to optimize traces for a running program (rather than between runs), then you may find the Orc APIs useful for reentering the JIT.