Dear All,
1) I recall reading somewhere that a few optimizations in LLVM 2.6 strip away debug information when such information interferes with optimization. Is this correct, and if so, does anyone know off-hand which optimizations do this?
2) I believe a new debug annotation design is being implemented in mainline LLVM for the 2.7 release. What is the current status of this work? Does it already yield more accurate debug information than LLVM 2.6?
Thanks in advance for any answers.
-- John T.
Hi John,
Dear All,
1) I recall reading somewhere that a few optimizations in LLVM 2.6 strip
away debug information when such information interferes with
optimization. Is this correct,
Yes.
and if so, does anyone know off-hand
which optimizations do this?
The optimizer does not emit any statistics here. You can search the
various transformation passes' code to see whether they handle
DbgInfoIntrinsics or not. Or maybe I misunderstood your question?
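For reference, a minimal sketch of the pattern to look for, assuming a present-day LLVM tree (header paths and the DbgInfoIntrinsic class as they exist now, not necessarily the 2.6 layout); the helper name here is invented for illustration:

// Illustrative only: a transform that wants to keep debug info intact
// typically special-cases the llvm.dbg.* intrinsics instead of treating
// them as ordinary calls.
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Hypothetical helper: does F contain any debug intrinsics that a
// transform would have to preserve or consciously drop?
static bool hasDebugIntrinsics(Function &F) {
  for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
    if (isa<DbgInfoIntrinsic>(&*I))   // llvm.dbg.declare, llvm.dbg.value
      return true;
  return false;
}

Grepping lib/Transforms for DbgInfoIntrinsic is the quickest way to see which passes handle these explicitly.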
2) I believe a new debug annotation design is being implemented in
mainline LLVM for the 2.7 release. What is the current status of this
work? Does it already yield more accurate debug information than LLVM 2.6?
It is still a work in progress. However, it now yields better scoping
information when the clang FE is used to generate debug info.
Devang Patel wrote:
Hi John,
Dear All,
1) I recall reading somewhere that a few optimizations in LLVM 2.6 strip
away debug information when such information interferes with
optimization. Is this correct,
Yes.
and if so, does anyone know off-hand
which optimizations do this?
The optimizer does not emit any statistics here. You can search the
various transformation passes' code to see whether they handle
DbgInfoIntrinsics or not. Or maybe I misunderstood your question?
Thanks for the information.
You've understood me correctly. I was just hoping that somebody could save me the trouble of looking at all the optimizations to see which ones zap debug information.
2) I believe a new debug annotation design is being implemented in
mainline LLVM for the 2.7 release. What is the current status of this
work? Does it already yield more accurate debug information than LLVM 2.6?
It is still a work in progress. However, it now yields better scoping
information when the clang FE is used to generate debug info.
Do the debug facilities in LLVM TOT, at present, maintain information better than LLVM 2.6 (i.e., if a front-end puts the debug information in, will the optimizations not take it out)? Is the information that the llvm-gcc front-end adds comparable to what llvm-gcc in LLVM 2.6 produces?
The problem that I'm having is that the SAFECode debug tool uses LLVM debug information to print out nice error messages saying, "Your buffer overflow is at line <x> in source file <file.c>." Before transforms started removing debug information, my tool would print out source line/file info that was acceptably accurate. After the change, it started printing out horribly wrong information (e.g., giving the source file and line number of some inlined function called by the function that actually generated the memory error).
I'm not so concerned with debug information being more accurate (although I like that, too). I'm more concerned with how well existing debug information is retained during optimizations.
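For concreteness, this is roughly the kind of lookup such a tool does once it has the faulting instruction — a minimal sketch, not SAFECode's actual code, written against the newer DebugLoc/DILocation interface rather than the 2.6-era one, with an invented function name:

// Illustrative sketch only. Given the instruction that caused the memory
// error, pull the source file and line out of its attached debug location.
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

static void reportMemoryError(const Instruction &BadInst) {
  if (DILocation *Loc = BadInst.getDebugLoc()) {
    errs() << "Your buffer overflow is at line " << Loc->getLine()
           << " in source file " << Loc->getFilename() << "\n";
  } else {
    // Debug info was stripped by some transform; nothing better to report.
    errs() << "Buffer overflow at an unknown source location\n";
  }
}

If the location on BadInst still points into an inlined callee, this prints exactly the misleading file/line described above.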
-- John T.
Hi John,
Do the debug facilities in LLVM TOT, at present, maintain information
better than LLVM 2.6 (i.e., if a front-end puts the debug information in,
will the optimizations not take it out)? Is the information that the
llvm-gcc front-end adds comparable to what llvm-gcc in LLVM 2.6 produces?
The FE has not changed significantly, other than bug fixes to improve
debug info.
The problem that I'm having is that the SAFECode debug tool uses LLVM debug
information to print out nice error messages saying, "Your buffer overflow
is at line <x> in source file <file.c>." Before transforms started removing
debug information, my tool would print out source line/file info that was
acceptably accurate. After the change, it started printing out horribly
wrong information (e.g., giving the source file and line number of some inlined
function called by the function that actually generated the memory error).
This is because, while cloning the function body during inlining, the
location information is also cloned, as expected. However, the inliner
should update the location information to indicate that the instruction
is inlined at this location. I have a local patch in my tree that is
awaiting a finishing touch in codegen before I commit it. I pasted it
below for your reference.
Now, each instruction can have location information attached to it.
The location info (DILocation) includes:
- unsigned - line number
- unsigned - column number
- DIScope - lexical scope
- DILocation - inlined at location
When an instruction is inlined, the first three fields stay unchanged,
but the inlined instruction will have a non-null "inlined at location"
indicating where the instruction was inlined.
Devang Patel wrote:
This is because, while cloning the function body during inlining, the
location information is also cloned, as expected. However, the inliner
should update the location information to indicate that the instruction
is inlined at this location. I have a local patch in my tree that is
awaiting a finishing touch in codegen before I commit it. I pasted it
below for your reference.
Now, each instruction can have location information attached to it.
The location info (DILocation) includes:
- unsigned - line number
- unsigned - column number
- DIScope - lexical scope
- DILocation - inlined at location
When an instruction is inlined, the first three fields stay unchanged,
but the inlined instruction will have a non-null "inlined at location"
indicating where the instruction was inlined.
Devang,
Could this approach be used for macros? I can single-step through macro bodies, but I can't set a breakpoint at the point of the macro invocation. It sounds like this will allow it.
http://ellcc.org/images/ssmacros.png
-Rich