Building for a specific target, corei7

Hi,

I am using the LLVM JIT infrastructure (MCJIT). I wanted to see if there are any performance gains when the compiler can detect the target CPU at runtime, but I didn't see any improvement (I compile with -no-mmx and -no-sse).

I then tried an experiment where I compiled the program with clang-3.3, with and without specifying the target CPU as "corei7". I was shocked to see that the only difference between the two binaries was related to "Instruction Set Extensions".

Further, I tried the same experiment with gcc and saw that the instructions were shuffled around in the binary. I expected this, because every CPU differs in one way or another (different buffer sizes for out-of-order execution, different cache sizes, etc.).

For clang, I was passing the "-march=corei7" flag.

For gcc, I was passing the "-mcpu=corei7" flag.

Am I passing the correct flags?

Any help, comments, or suggestions would be appreciated.

Thanks,

Hi Varun,

Have you tried your experiment with icc by any chance?

The MCJIT component does not assume that you will be executing the generated code on the host system, because it can be used to generate code for external targets. However, you can specify the CPU by calling setCPU() on the EngineBuilder object before creating your execution engine. (You can use sys::getHostCPUName() to figure out which CPU you are running on; that will also detect AVX support, which you don't get with the generic "corei7" CPU flag.) I would expect that if you do that, it would generate code similar to clang's.
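Roughly, it looks like this (an untested sketch against the 3.3-era API; the setter is spelled setMCPU() in the tree, and createHostTunedEngine is just an illustrative helper name, error handling and module creation are omitted):

// Untested sketch, LLVM 3.3-era EngineBuilder API.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"       // forces MCJIT to be linked in
#include "llvm/IR/Module.h"
#include "llvm/Support/Host.h"                // sys::getHostCPUName()
#include "llvm/Support/TargetSelect.h"

llvm::ExecutionEngine *createHostTunedEngine(llvm::Module *M) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  std::string Err;
  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(M)
          .setErrorStr(&Err)
          .setUseMCJIT(true)                    // use MCJIT rather than the old JIT
          .setMCPU(llvm::sys::getHostCPUName()) // e.g. "corei7-avx" on a Sandy Bridge host
          .create();
  return EE;                                    // null on failure; Err explains why
}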

-Andy

Hi Andrew,

I think I muddled my question. My question was not related to MCJIT.

I ran the following 4 scenarios:

(1) gcc -mcpu=corei7 tetris.c -o tetris

(2) gcc -mcpu=athlon64 tetris.c -o tetris

(3) clang -march=corei7 tetris.c -o tetris

(4) clang -march=athlon64 tetris.c -o tetris

In (1) and (2), I see differences in the order of instructions in the output binaries, which I expected because every CPU has a different micro-architecture and the compiler is hopefully making use of that information. (I still need to verify the performance improvement, but that is not related to my question.)

But in (3) and (4), I don't see any difference in the output binaries other than the instruction set extensions used. This suggests that some optimization is happening, but that it is not based on the micro-architecture of the CPU.

I just want to ask whether this is the expected behavior, and if so, whether this kind of optimization is going to be added to LLVM at some point in the future.

Thanks,

Varun Agrawal

Hi Varun,

I see the point of your question, but I’m not the best person to answer from that perspective.

Nadav Rotem is the owner of the x86 backend, and he can probably give you a more complete answer than I could.

Thanks,

Andy

Hi Andrew and Varun,

The most interesting additions to the x86 instruction set since the move to 64 bits have been the new vector instructions. If your code is not vectorizable, then you should see similar code. With the new MI scheduler (to be enabled by default soon) you may see greater differences in the binary, because it has a better machine model. At the moment we don't generate code that checks CPUID at runtime, but this is an interesting feature to discuss. If during your analysis you run into interesting findings, please share them with us on the mailing list. We are constantly looking for opportunities to improve the compiler.
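To make the CPUID idea concrete, this is roughly what such a runtime check looks like when written by hand today (an untested sketch; it relies on the __builtin_cpu_init()/__builtin_cpu_supports() builtins available since GCC 4.8, and the saxpy names are hypothetical):

#include <cstddef>

// AVX variant: the target attribute lets the compiler use AVX in this
// function only, while the rest of the file stays at the baseline ISA.
__attribute__((target("avx")))
static void saxpy_avx(float *y, const float *x, float a, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    y[i] += a * x[i];
}

// Baseline variant for CPUs without AVX.
static void saxpy_generic(float *y, const float *x, float a, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    y[i] += a * x[i];
}

// Dispatcher: picks a variant based on the CPU it finds at runtime.
void saxpy(float *y, const float *x, float a, std::size_t n) {
  __builtin_cpu_init(); // populate the CPU feature table
  if (__builtin_cpu_supports("avx"))
    saxpy_avx(y, x, a, n);
  else
    saxpy_generic(y, x, a, n);
}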

Thanks,
Nadav