Linking libc statically into a program, and optimizations.

Hi,

We have been working on porting the llvm-gcc cross-compiler (basically I had to create a new dummy target configuration with some minimal information about our processor: endianness, type sizes, etc.), which compiles llvm bytecode for our processor architecture (it does not produce native binaries or assembler), as well as a new llvm target for our custom processor. We have also already managed to compile newlib to llvm bytecode (an archive of bytecode objects packed with llvm-ar) with our cross-compiler.

Right now we use the tools as follows. We first compile bytecode files with cross-llvm-gcc and then link them together with a cross-llvm-gcc command, which automatically includes the precompiled crt0, crtend and libc.a files to produce a fully linked bytecode program. After linking we run various "opt" passes, and finally we generate target assembler with llc.
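
Roughly, the pipeline looks like the sketch below (file names and the target name are made up, and the exact opt/llc flags depend on the LLVM version in use):

  # compile each source file to llvm bytecode
  cross-llvm-gcc -c foo.c -o foo.bc
  cross-llvm-gcc -c bar.c -o bar.bc

  # link; cross-llvm-gcc pulls in the precompiled crt0, crtend and libc.a
  cross-llvm-gcc foo.bc bar.bc -o prog.bc

  # run mid-level optimizations, then generate target assembler
  opt -std-compile-opts prog.bc -o prog.opt.bc
  llc -march=<our-target> prog.opt.bc -o prog.s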

Now we ran into two problems:

1. When we link libc statically to our program in an early phase of the compilation, the linker automatically selects from the libc.a archive only those compilation units that contain needed symbols. When cross-llvm-gcc encounters malloc calls, it automatically converts them to malloc instructions, so when libc is linked statically the linker doesn't see any references to the malloc function and doesn't include the malloc compilation unit from libc.a.

For now it's fixed by lowering the malloc instructions of the program directly after each "cross-llvm-gcc -c" command. Another approach to this problem was putting libc.a together with an "llvm-ld -r" command instead of llvm-ar... This way the whole libc is always included in the optimization stage, and the lowerallocs pass is run before the dead code elimination passes. The disadvantages of this approach were a couple of seconds of extra delay when optimizing the program and a somewhat larger binary (I haven't investigated the reasons for the larger binary yet). Both workarounds are sketched below.
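
Roughly (the pass name and the exact invocations are from our setup and should be treated as approximate):

  # workaround 1: lower malloc/free instructions back to libc calls
  # right after each per-file compile, so the references are visible
  # when libc.a is linked in
  cross-llvm-gcc -c foo.c -o foo.bc
  opt -lowerallocs foo.bc -o foo.low.bc

  # workaround 2: merge the whole libc into one bytecode module instead
  # of an archive, so nothing is dropped before the lowerallocs pass
  llvm-ld -r -o libc.bc newlib/*.bc   # instead of: llvm-ar rc libc.a newlib/*.bc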

2. Memset and memcpy calls are replaced with llvm intrinsics, and because of that the implementations of those libc functions are optimized away before the llc phase.

I would like to hear some comments, especially on how lowering allocs at an early stage in tce-llvm-gcc (problem 1) affects optimization of the code, and whether there is a way to also lower the memcpy and memset intrinsics in the optimization phase to prevent the implementations of these functions from being eliminated. All other comments are also more than welcome.

-mikuli

Hi,

I'll just reply here in case someone else encounters the same problem.

With llvm-gcc, the -ffreestanding switch prevents raising the memset, memcpy, etc. function calls to llvm intrinsics.
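
For example (file name is made up):

  cross-llvm-gcc -ffreestanding -c foo.c -o foo.bc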

-mikuli

Mikael Lepistö wrote:

We have been working on porting the llvm-gcc cross-compiler (basically I had to create a new dummy target configuration with some minimal information about our processor: endianness, type sizes, etc.), which compiles llvm bytecode for our processor architecture (it does not produce native binaries or assembler), as well as a new llvm target for our custom processor. We have also already managed to compile newlib to llvm bytecode (an archive of bytecode objects packed with llvm-ar) with our cross-compiler.

cool

Right now we use the tools as follows. We first compile bytecode files with cross-llvm-gcc and then link them together with a cross-llvm-gcc command, which automatically includes the precompiled crt0, crtend and libc.a files to produce a fully linked bytecode program. After linking we run various "opt" passes, and finally we generate target assembler with llc.

Ok.

Now we ran into two problems:

1. When we link libc statically to our program in an early phase of the compilation, the linker automatically selects from the libc.a archive only those compilation units that contain needed symbols. When cross-llvm-gcc encounters malloc calls, it automatically converts them to malloc instructions, so when libc is linked statically the linker doesn't see any references to the malloc function and doesn't include the malloc compilation unit from libc.a.

Right. Also, if you do certain operations that aren't supported by your hardware (a common one is 64-bit integer divide/rem), you'll get calls into libgcc to do these operations.
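
For example (just a sketch; whether this happens depends on how your target lowers 64-bit operations in llc): a 64-bit division stays as a plain divide in the bytecode, but code generation typically turns it into a call to a libgcc helper such as __divdi3, which then has to be provided at native link time:

  echo 'long long f(long long a, long long b) { return a / b; }' > div64.c
  cross-llvm-gcc -c div64.c -o div64.bc
  llc -march=<our-target> div64.bc -o div64.s
  grep divdi3 div64.s    # the helper call only appears after code generation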

For now it's fixed by lowering the malloc instructions of the program directly after each "cross-llvm-gcc -c" command. Another approach to this problem was putting libc.a together with an "llvm-ld -r" command instead of llvm-ar... This way the whole libc is always included in the optimization stage, and the lowerallocs pass is run before the dead code elimination passes. The disadvantages of this approach were a couple of seconds of extra delay when optimizing the program and a somewhat larger binary (I haven't investigated the reasons for the larger binary yet).

2. Memset and memcpy calls are replaced with llvm intrinsics, and because of that the implementations of those libc functions are optimized away before the llc phase.

right.

I would like to hear some comments, especially on how lowering allocs at an early stage in tce-llvm-gcc (problem 1) affects optimization of the code, and whether there is a way to also lower the memcpy and memset intrinsics in the optimization phase to prevent the implementations of these functions from being eliminated. All other comments are also more than welcome.

On the one hand, I don't think that this problem is solvable in general: there are lots of miscellaneous places that can introduce new symbols. Many of these can be fixed (e.g. pass -fno-builtin or -ffreestanding), but others can't really be (libgcc functions).

On the other hand, it's not super important to fix these. None of these functions can be meaningfully inlined or optimized profitably, so there isn't a great need to have these in llvm form.

I suggest compiling these functions to native .o files, putting them into an archive, and linking the archive into the app after the LLVM IPO pieces are done. This way you get LLVM IPO, and you get full support for arbitrary lowered library calls. Since the native code is in a .a file, the objects are only linked in if referenced.
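
As a sketch of that flow (the native assembler/archiver/linker names are hypothetical, since they depend on whatever native toolchain exists for the target):

  # build the lowered library routines as native objects, once
  llc -march=<our-target> memcpy.bc -o memcpy.s
  <target>-as memcpy.s -o memcpy.o
  <target>-ar rc libnative.a memcpy.o memset.o malloc.o ...

  # do all LLVM IPO on the bytecode program first, then go native and
  # link against the native archive at the very end
  opt -std-compile-opts prog.bc -o prog.opt.bc
  llc -march=<our-target> prog.opt.bc -o prog.s
  <target>-as prog.s -o prog.o
  <target>-ld prog.o libnative.a -o prog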

-Chris