Clang and i128

Hi all,

I currently use LLVM 3.0 clang to compile my source code to bitcode
(on an x86-64 machine) before it is later processed by a pass, like this:

$ clang -m32 -O3 -S foo.c -emit-llvm -o foo.ll

However, for some reason the resulting module contains 128-bit
instructions, e.g.:

%6 = load i8* %arrayidx.1.i, align 1, !tbaa !0
%7 = zext i8 %6 to i128
%8 = shl nuw nsw i128 %7, 8

which the pass can't handle (and never will).

So my question is: why does this happen? The source code doesn't use
integer types larger than 32 bits. Is there an option to prevent clang
from introducing those types? If not, which pass might be responsible
for this kind of optimization?
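For reference, here is a hypothetical reduction (function name and buffer
layout are made up, not my actual source) of the kind of code involved —
only 8-bit and 32-bit types appear in the C:

```c
/* Hypothetical reduction: byte-wise packing from a 16-byte local
 * buffer. At -O3, scalar replacement can promote the whole buffer to
 * one wide integer, which is where i128 zext/shl can appear even
 * though the source only mentions uint8_t and uint32_t. */
#include <stdint.h>
#include <string.h>

uint32_t pack(const uint8_t *p)
{
    uint8_t buf[16];
    memcpy(buf, p, 16);          /* local aggregate: SROA candidate */
    uint32_t r = 0;
    for (int i = 0; i < 4; i++)
        r = (r << 8) | buf[i];   /* big-endian pack of first 4 bytes */
    return r;
}
```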

Thanks in advance,
Mario

Mario,

The ScalarReplAggregates pass attempts to convert structs into scalars to enable many other optimizations. Try running this pass with a different threshold, or try placing a breakpoint on ConvertScalar_ExtractValue and check whether you can manually disable some of the transformations in SRA.
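As a sketch (hypothetical names, not taken from Mario's code), the pattern the pass targets is a small local aggregate that never escapes the function:

```c
/* A 16-byte local struct that never escapes. ScalarReplAggregates may
 * rewrite such an aggregate as a single integer as wide as the whole
 * struct (here 128 bits), so i128 operations can show up in the IR
 * even though the source only uses 64-bit types. */
#include <stdint.h>

struct pair { uint64_t lo; uint64_t hi; };

uint64_t sum_pair(void)
{
    struct pair p = { 1, 2 };   /* local aggregate: SRA candidate */
    return p.lo + p.hi;
}
```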

Nadav