I've noticed a slight difference between the code emitted by LLVM-GCC
and what the online demo emits. For instance, the C code:
malloc (5 * sizeof(int))
becomes
malloc [5 x int]
when using the online demo, and
call @malloc (i32 20)
when using llvm-gcc.
I understand that the effective behavior is the same thing, but for
some work that I'm doing it's much better to have the malloc [5 x
int], because it is more detailed. Is there some way to get llvm-gcc
to emit this information, or an optimization pass I can run to recover
this more detailed information?
Thanks,
Ben Chambers
> I've noticed a slight difference between the code emitted by LLVM-GCC
> and what the online demo emits. For instance, the C code:
The primary reason for this is that the demo page is running LLVM 1.9, whereas mainline more closely resembles LLVM 2.0. When 2.0 is done (real soon now) we'll switch it over.
> malloc (5 * sizeof(int))
> becomes
> malloc [5 x int]
> when using the online demo, and
> call @malloc (i32 20)
> when using llvm-gcc.
> I understand that the effective behavior is the same thing, but for
> some work that I'm doing it's much better to have the malloc [5 x
> int], because it is more detailed. Is there some way to get llvm-gcc
> to emit this information, or an optimization pass I can run to recover
> this more detailed information?
Are you passing -O3 to llvm-gcc? This could be a case where the raiseallocs pass fails to turn the malloc call into a malloc instruction. If -O3 doesn't help, please file a bug in Bugzilla and we can fix it. Thanks!
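The transformation raiseallocs performs is essentially the following (a sketch mixing the 1.9-era type names from the quoted output with a plain call form, not verbatim output from any particular llvm-gcc version):

```llvm
; Before raiseallocs: malloc is an ordinary call taking a byte count
; (5 * sizeof(int) = 20); the array shape is no longer explicit.
%buf = call sbyte* @malloc(i32 20)

; After raiseallocs: malloc is a first-class instruction, so the
; element type and count survive in the IR.
%arr = malloc [5 x int]
```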
-Chris