Hi all, I’m trying to reduce the startup time for my JIT, but I’m running into the problem that the majority of the time is spent loading the bitcode for my standard library, and I suspect it’s due to debug info. My stdlib is currently about 2kloc spread over a number of C++ files; I compile them with clang -g -emit-llvm, link them together with llvm-link, run opt -O3 on the result, and arrive at a 1MB bitcode file. I then embed this as a binary blob into my executable and call ParseBitcodeFile on it at startup.
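For reference, the startup path looks roughly like this (just a sketch against the 3.4-era llvm/Bitcode/ReaderWriter.h API; the embedded-blob symbol names are placeholders I made up):

#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"

// Placeholder symbols for the stdlib bitcode blob embedded in the executable.
extern const char stdlib_bc_start[];
extern const unsigned stdlib_bc_size;

llvm::Module *loadStdlib(llvm::LLVMContext &Ctx) {
  // Wrap the in-memory blob without copying it.
  llvm::MemoryBuffer *Buf = llvm::MemoryBuffer::getMemBuffer(
      llvm::StringRef(stdlib_bc_start, stdlib_bc_size), "stdlib.bc",
      /*RequiresNullTerminator=*/false);

  // Eagerly parse the whole module; this is the ~60ms step.
  std::string Err;
  llvm::Module *M = llvm::ParseBitcodeFile(Buf, Ctx, &Err);
  delete Buf; // the eager parse doesn't take ownership of the buffer
  return M;   // null on error, with the message left in Err
}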
Unfortunately, this parsing takes about 60ms right now, which is the main component of my ~100ms time to run on an empty source file (another ~20ms is loading the pre-jit’d image through an ObjectCache). I thought I’d save some time by using getLazyBitcodeModule, since the IR isn’t actually needed right away, but this only reduced the parsing time (i.e. the time of the actual getLazyBitcodeModule() call) to 45ms, which surprised me. I also timed a byte-wise XOR over the bitcode file to make sure it was fully read into memory; that took about 5ms, so the majority of the time does seem to be spent parsing.
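The lazy path is just the one call swapped out (again a sketch against the 3.4-era API; in that version the returned module takes ownership of the buffer on success, so you don’t free it yourself):

llvm::Module *loadStdlibLazily(llvm::MemoryBuffer *Buf, llvm::LLVMContext &Ctx) {
  // Only the top-level records are read up front; function bodies stay in
  // the buffer until something asks for them to be materialized (e.g. the
  // JIT the first time it needs to codegen a stdlib function).
  std::string Err;
  llvm::Module *M = llvm::getLazyBitcodeModule(Buf, Ctx, &Err);
  // On failure M is null, Err has the message, and Buf still belongs to the caller.
  return M;
}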
Then I switched back to ParseBitcodeFile, but now I added the “-strip-debug” flag to my opt invocation, which reduced the bitcode file down to about 100KB, and reduced the parsing time to 20ms. What surprised me the most was that if I then switched to getLazyBitcodeModule, the parsing time was cut down to 3ms, which is what I was originally expecting. So when lazy loading, stripping out the debug info cuts down the initialization time from 45ms to 3ms, which is why I suspect that getLazyBitcodeModule is still parsing all of the debug info.
To work around it, I can generate separate builds, one with debug info and one without, but I’d like to avoid doing that. I did some simple profiling of what getLazyBitcodeModule was doing, and it wasn’t terribly informative (it spends most of its time in parsing-related functions); does anyone have any idea whether this is something that could be fixed, or should I just move on?
Any chance you can share either your bitcode file or some other bitcode file that seems about the same size and generally representative of the performance problems you’re having?
This summer I was working on LTO, and Rafael mentioned to me that debug info is not lazily loaded, which was the cause of the insane resource usage I was seeing when doing LTO with debug info. This is likely the reason that lazy loading was so ineffective for your debug build.
Rafael, am I remembering this right, and can you give more information? I expect that this will have to get fixed before pitching LLD as a turnkey LTO solution (not sure where in the priority list it is).
There are two problems here:
* Duplicate type debug information.
* All metadata (including debug info) is loaded eagerly.
As Eric mentioned, we can now merge type debug info from multiple
translation units, which results in a smaller total size. Kevin, what
LLVM version are you using? Do you get a smaller combined bitcode with
trunk?
The issue of loading all of the debug info ahead of time is still
there. We will need to fix that at some point or reduce its size
further so that the impact is small enough.
You should be able to recreate the stdlib.bc and stdlib.stripped.bc files by doing:
$LLVM/Release/bin/llvm-link build/{bool,dict,file,float,gc_runtime,int,list,objmodel,rewriter,str,tuple,types,util,math,time}.o.bc -o stdlib.bc # looks like you need to give the source files in the exact same order to get the same output
$LLVM/Release/bin/opt -strip-debug -O3 stdlib.bc -o stdlib.stripped.bc
I tested with revisions 199542 and 199954, and it looks like there’s roughly a 6% decrease in bitcode size and maybe a 10-20% improvement in loading time, which is pretty nice, though it’s still about 10x slower than loading the stripped version (50ms vs 5ms).