[GSoC 2018] Improve Debugging of Optimized Code

Hello,

My proposal for GSoC 2018 [attachment] on
improving debugging of optimized code has been accepted.

I will work with my mentor Vedant Kumar (vsk) through the summer to
find where debug info (DI) loss occurs and try to fix as many occurrences as possible.

I will also be keeping a devlog here: https://gramanas.github.io/

You can read about the first steps of my project in the latest post there.

I'd love to hear any thoughts or questions you might have.

Cheers,
Anastasis

GSoC2018_llvm_final.pdf (40.3 KB)

Welcome Anastasis! I'm very happy to see this work going forward.

I see that your first task is to make "debugify" into a function pass,
so we will be able to get data on a per-pass basis. At EuroLLVM there
was a relevant lightning talk about a different tool, see:
http://llvm.org/devmtg/2018-04/talks.html#Lightning_11
It will be extremely interesting to see how well your results line up
with the data that Greg was able to gather on how well (or poorly)
various passes handled debug info.
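For anyone following along, the module-level debugify utility can already be exercised from opt today; the flags below are the ones I believe are in tree as of this writing (check `opt --help` in case they have changed):

```shell
# Attach synthetic debug info to the module, run a pass, then verify
# how much of that debug info survived the transformation.
opt -debugify -instcombine -check-debugify -S input.ll -o output.ll

# Or bracket an entire pipeline with the debugify/check-debugify pair:
opt -enable-debugify -O2 -S input.ll -o /dev/null
```

Making debugify a function pass, as described above, is what would let this kind of check run between individual passes rather than only around a whole pipeline.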

I hope you have a great GSoC experience here.
--paulr

Hi Anastasis,

Welcome, and congratulations!

It's great to see that you're maintaining a devlog and have hit the ground running. I'm looking forward to working with you.

> Welcome Anastasis! I'm very happy to see this work going forward.
>
> I see that your first task is to make "debugify" into a function pass,
> so we will be able to get data on a per-pass basis. At EuroLLVM there
> was a relevant lightning talk about a different tool, see:
> http://llvm.org/devmtg/2018-04/talks.html#Lightning_11
> It will be extremely interesting to see how well your results line up
> with the data that Greg was able to gather on how well (or poorly)
> various passes handled debug info.

+1

vedant

Absolutely. As I said in my lightning talk at EuroLLVM, I think the two approaches (debugify vs. DExTer) will be very nicely complementary: improvements found through one approach should be verifiable with the other, which I think could be very powerful. I'm hoping my tool will be open sourced very soon (I was disappointed not to have managed it in time for my talk), but regardless I'll be continuing to add test coverage and to measure and share results in the meantime.

As the talk video and poster aren't up yet, my quick description of DExTer is that it automatically drives the debugger to step through some test code, compares the results against a set of expectations, and uses a simple heuristic to assign an overall score to the quality of the debugging experience (see
https://bugs.llvm.org/show_bug.cgi?id=37234 for an example of using it to spot a change in the user debugging experience).
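To give a flavour of what "compare against expectations and score" means, here is a toy sketch. This is not DExTer's actual heuristic (which isn't public yet); the data layout, function name, and scoring rule are all invented for illustration:

```python
# Toy illustration: score a debugging session by comparing the
# (line, variable, value) observations a debugger actually reported
# against what the unoptimized source would lead a user to expect.
# NOT DExTer's real heuristic; purely a hypothetical sketch.

def experience_score(expected_steps, observed_steps):
    """Return a score in [0, 1]: the fraction of expected
    (line, var, value) observations the debugger reproduced."""
    expected = set(expected_steps)
    observed = set(observed_steps)
    if not expected:
        return 1.0
    return len(expected & observed) / len(expected)

# Example: the optimizer dropped the location for `y` at line 3,
# so the debugger only ever saw `x`.
expected = [(2, "x", 1), (3, "y", 2)]
observed = [(2, "x", 1)]
print(experience_score(expected, observed))  # -> 0.5
```

A real tool of course has to handle stepping order, out-of-scope variables, and `<optimized out>` values, but the basic idea of reducing the session to a single comparable number is the same.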

My hope is that by the end of the summer we'll be able to show a steady improvement in the score across a number of examples, as a result of your project and any other work underway to improve debugging of optimized code! I'll be watching with great interest. Let me know if I can help in any way. Good luck!

-Greg