Sources on optimization and debugging

Hi Everyone –

I’m planning on using LLVM to add optimizing-compiler capability to a byte-code-driven virtual machine that is part of the foundation platform for a series of tool products I am building. I’m still fairly new to this whole arena and am particularly curious about one important aspect: it strikes me that the more optimizations are applied to code (whether at the source-code, byte-code, intermediate-language, or assembly level), the farther the resulting optimized code is likely to drift from the original source. I’m fairly sure this complicates whatever language-debugging capabilities one puts in place and makes it more difficult to keep code execution aligned with a source-code view in a step-debugging context.

Does anyone know of any good sources for getting a handle on this issue and understanding strategies that IDE writers adopt to allow people to step through code that has been optimized?

Any thoughts would be greatly appreciated.



The most important thing, obviously, is to make sure the compiler
generates debug information[1] and then to use a debugger that
understands the information the backend generates from it
(DWARF-format debug information on most non-Windows platforms -- I'm
not sure whether LLVM supports any other debug format at the moment).

Many of the optimizers (maybe all of them by now, I'm not sure) try
to update the debug information to the best of their ability, but
it's unavoidable that the debugging experience will deteriorate for
some optimized code -- for instance, jumping back and forth between
consecutive source lines when the compiler decides it's best to
execute them in an interleaved manner.

[1]: Documentation at