JIT Optimization Levels?

Hello,

Is there a way to control how optimized the x86 code generated by the JIT
is? I mean the actual x86 code, not the LLVM IR (I know about those
optimization passes). I would like to make it as optimized as reasonably
possible.

Thank you for your time,

- Maxime

Maxime wrote:

> Is there a way to control how optimized the x86 code generated by the JIT
> is? I mean the actual x86 code, not the LLVM IR (I know about those
> optimization passes). I would like to make it as optimized as reasonably
> possible.

Then the default is what you want. By default, all the codegen optimizations are run.
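
If you ever want to set it explicitly, e.g. to compare against unoptimized codegen, you can pass an optimization level when you create the engine. Something like this, assuming the EngineBuilder API (header locations and exact signatures shift a bit between releases):

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/Target/TargetMachine.h"  // home of CodeGenOpt in older releases

    llvm::ExecutionEngine *makeJIT(llvm::Module *M) {
      std::string Err;
      // CodeGenOpt::Default is what the JIT uses out of the box;
      // Aggressive is -O3-style codegen, None favors compile speed.
      llvm::ExecutionEngine *EE = llvm::EngineBuilder(M)
          .setErrorStr(&Err)
          .setOptLevel(llvm::CodeGenOpt::Aggressive)
          .create();
      // On failure create() returns null and Err holds the reason.
      return EE;
    }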

Evan

I was simply surprised because some C++ code I implemented/translated into
LLVM IR ran significantly slower in the JIT than the C++ version. The code
in question was meant to implement the "plus" operator in my scripting
language, and had different behaviors depending on the type of the objects
being added. I expected it to run faster as I was eliminating a call to a
C++ function by generating the LLVM IR for what I wanted to do directly,
essentially inlining the code for the "plus" operator into the code I was
generating/JITing.
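
To make that concrete, what I generate for "plus" is essentially a tag check with an inlined fast path and a call into the runtime as the fallback. A rough sketch of my builder code, against a 2.x-era IRBuilder; the tag value and the rt_plus fallback are placeholders for my runtime, not real API:

    #include "llvm/Support/IRBuilder.h"  // moved to llvm/IR/IRBuilder.h later

    // Emit: if (tag == int) inline the add, else call the runtime's plus.
    // Lhs/Rhs are assumed to already be unboxed i64 payloads for brevity.
    llvm::Value *emitPlus(llvm::IRBuilder<> &B, llvm::Function *F,
                          llvm::Value *LhsTag, llvm::Value *Lhs,
                          llvm::Value *Rhs, llvm::Function *RtPlus) {
      llvm::LLVMContext &Ctx = B.getContext();
      llvm::BasicBlock *IntBB   = llvm::BasicBlock::Create(Ctx, "plus_int", F);
      llvm::BasicBlock *SlowBB  = llvm::BasicBlock::Create(Ctx, "plus_slow", F);
      llvm::BasicBlock *MergeBB = llvm::BasicBlock::Create(Ctx, "plus_merge", F);

      // Fast path only when the operand is tagged as an integer
      // (0 stands in for the runtime's actual integer tag).
      llvm::Value *IsInt = B.CreateICmpEQ(
          LhsTag, llvm::ConstantInt::get(LhsTag->getType(), 0), "is_int");
      B.CreateCondBr(IsInt, IntBB, SlowBB);

      B.SetInsertPoint(IntBB);
      llvm::Value *FastSum = B.CreateAdd(Lhs, Rhs, "fast_sum");
      B.CreateBr(MergeBB);

      B.SetInsertPoint(SlowBB);
      llvm::Value *SlowSum = B.CreateCall2(RtPlus, Lhs, Rhs, "slow_sum");
      B.CreateBr(MergeBB);

      B.SetInsertPoint(MergeBB);
      llvm::PHINode *Result = B.CreatePHI(Lhs->getType(), "plus_result");
      Result->addIncoming(FastSum, IntBB);
      Result->addIncoming(SlowSum, SlowBB);
      return Result;
    }

Only the integer case gets the inlined fast path; every other type combination still pays for the runtime call, plus the branch in front of it.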

Maxime wrote:

> I was simply surprised because some C++ code I implemented/translated into
> LLVM IR ran significantly slower in the JIT than the C++ version. The code
> in question was meant to implement the "plus" operator in my scripting
> language, and had different behaviors depending on the type of the objects
> being added. I expected it to run faster as I was eliminating a call to a
> C++ function by generating the LLVM IR for what I wanted to do directly,
> essentially inlining the code for the "plus" operator into the code I was
> generating/JITing.

You have to profile it (or at least look at the generated assembly code). Assuming everything else is done correctly, the question is then whether the cost of JITing is too high for the type of code you are running.
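
A quick way to look at the generated code is to dump the IR your front end produces and run it through llc (e.g. llc -O2 plus.ll -o plus.s); llc drives the same code generator as the JIT, so the resulting x86 assembly is a close approximation of what gets JITed.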

Evan