My name is Mohamed and I am a first-year computer science master’s student. I am writing because I am interested in the project idea “Debug Info Should have no effect on codegen”. So far I have built LLVM and Clang and have been trying to reproduce the problem on simple C++ programs, but I can’t. To be specific, these are the steps I am taking:
$ clang -g test.cc
$ llvm-objdump -d a.out > with_debug
$ clang test.cc
$ llvm-objdump -d a.out > no_debug
$ vimdiff with_debug no_debug
I still can’t reproduce the problem. In the expected results of the project you mentioned that there are already failing test cases. Could you please point me to those cases?
Finally, I would like to ask whether there is a qualification task I should do, or whether I should fix some of the open tickets. Could you please point me to such tasks or suggest some tickets/bugs that I could start working on?
It’s quite possible that your test.cc is too simple. After all, we are trying our best to avoid any differences. Have you tried something larger, such as FileCheck, llc, or even clang? You can also look at the problem on a pass-by-pass basis using the debugify functionality.
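If it helps, here is a rough sketch of driving debugify from opt. The flag spellings below are from memory and may differ between LLVM versions, so treat them as assumptions and check `opt --help-hidden` on your build:

```shell
# Synthesize debug info, run one pass, then verify the debug info survived.
# (pass and flag names are assumptions; verify against your opt version)
opt -passes='debugify,instcombine,check-debugify' -S input.ll -o /dev/null

# Or wrap every pass in the -O2 pipeline in a debugify/check-debugify pair,
# which narrows a problem down to the individual pass that introduced it:
opt -debugify-each -O2 -S input.ll -o /dev/null
```

The second form is handy because it reports per-pass failures, so you don’t have to guess which pass to test first.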
I think an easy way to start here would probably be to use the test suite, or a subset of it.
I would recommend using the SingleSource tests to start out with. The MultiSource tests will lead you into having to bisect on object files, which is a bit more work. There are a couple of large SingleSource tests which I imagine might have a few differences in them.
A workflow like this might help you get started with finding examples of different codegen:
- Build, but don’t execute, the tests with debug info
- Build, but don’t execute, the tests without debug info
- Use test-suite/utils/compare.py on the output JSON reports to check if there are any differences in, say, text size (which implies different codegen)
- Bisect on object files if necessary
- Use something like bugpoint to reduce the testcase
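The first three steps of that workflow might look roughly like the sketch below. The build directory names are placeholders, and the exact metric name passed to compare.py is an assumption that may vary by platform:

```shell
# Run (but don't execute-compare) the suite twice, collecting lit's JSON
# reports. Assumes two configured build trees: build-g/ was configured with
# -DTEST_SUITE_EXTRA_C_FLAGS=-g and build-nog/ without it (placeholder names).
llvm-lit -v -o with_g.json build-g
llvm-lit -v -o without_g.json build-nog

# Compare text-section sizes between the two runs; any delta implies the
# generated code differs. The metric name "size.__text" is an assumption.
test-suite/utils/compare.py -m size.__text with_g.json without_g.json
</imports>
```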
You can build the LLVM test suite by following the instructions here:
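In case it saves you a lookup, the usual CMake invocation looks roughly like this. The compiler paths are placeholders, and the cache file is one of several available; the official test-suite guide has the canonical steps:

```shell
# A typical configure-and-build of the LLVM test suite (paths are placeholders).
mkdir test-suite-build && cd test-suite-build
cmake -DCMAKE_C_COMPILER=/path/to/clang \
      -DCMAKE_CXX_COMPILER=/path/to/clang++ \
      -C ../test-suite/cmake/caches/O3.cmake \
      ../test-suite
make -j"$(nproc)"
```

Swapping in a different cache file (or adding -DTEST_SUITE_EXTRA_C_FLAGS=-g) is how you get the with-debug-info build for comparison.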
I find this blog post pretty useful for bugpoint stuff:
We’ve got a meta-bug for these sorts of issues here: https://bugs.llvm.org/show_bug.cgi?id=37728
You might also be interested in this presentation from a few years back: https://llvm.org/devmtg/2015-04/slides/Verifying_code_gen_dash_g_final.pdf
I’ve just set off a test-case reduction, using the check_cfc tool, of a case I ran into on one of our internal test suites. Once that’s complete, I hope to upload it as an example bug attached to the meta-bug above.
Thanks Greg, that's awesome
I raised an example bug here:
Let me know if you have any trouble reproducing it from the description there. Hopefully you should just be able to build it for x86_64 with and without -g specified and see that we get a difference in code generation.
I essentially did what Jessica suggested above, although instead of the LLVM test suite I used one of our internal test suites, and instead of the compare.py script I used check_cfc, which turns any codegen differences into compilation errors. That also has the useful property of making it trivial to reduce very large test cases (I think this one started out in the 200 KLOC range) into something more manageable using tools such as C-Reduce ( https://embed.cs.utah.edu/creduce/ ).
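For anyone wanting to try the same reduction approach, a C-Reduce interestingness test for this kind of bug can be as small as the sketch below. The file name, -O2, and the exact flags are illustrative assumptions, not a canonical recipe:

```shell
#!/bin/sh
# Hypothetical C-Reduce interestingness test: the candidate file test.c stays
# "interesting" as long as adding -g still changes the generated code.
clang -O2 -c test.c -o without_g.o || exit 1
clang -O2 -g -c test.c -o with_g.o || exit 1
# Compare disassembly rather than the raw objects, since -g also adds debug
# sections that are harmless on their own.
llvm-objdump -d --no-show-raw-insn without_g.o > without_g.txt
llvm-objdump -d --no-show-raw-insn with_g.o > with_g.txt
# Exit 0 (interesting) only if the disassembly differs.
! diff -q without_g.txt with_g.txt > /dev/null
```

C-Reduce repeatedly shrinks test.c while this script keeps exiting 0, which is what makes reducing a 200 KLOC case tractable.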