CFLAA

Hi David,

I am the one who was responsible for the CFLAA refactoring over the summer. I've sent out another email on llvm-dev, and you can find more about my work in my GSoC final report.

I think it is fantastic that you have done such interesting work. I'll definitely try to help get the code reviewed and merged into the current tree. After a quick glance at your patch, it seems that what you are trying to do there is an optimized version of CFL-Steens, with a custom way of handling context sensitivity. I'll be happy if we can end up integrating it into the existing CFL-Steens pass.

Regarding the benchmark numbers, I'm very interested in what kind of test files you were running the experiments on. Is it possible to share them?

Hi Jia, nice to meet you,

Hi David,

I am the one who was responsible for the CFLAA refactoring over the summer. I've sent out another email on llvm-dev, and you can find more about my work in my GSoC final report.

Is this report available?

I think it is fantastic that you have done such interesting work. I'll definitely try to help get the code reviewed and merged into the current tree. After a quick glance at your patch, it seems that what you are trying to do there is an optimized version of CFL-Steens, with a custom way of handling context sensitivity. I'll be happy if we can end up integrating it into the existing CFL-Steens pass.

The work was more about improving the accuracy of the equivalencing step than it is about context sensitivity. In fact, it is only context-sensitive to the extent there is simulated inlining. There is no downward propagation of facts into called functions.
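
To make that concrete, here is a small made-up C++ illustration of what downward propagation of caller facts into a callee would mean for an alias query; it is not taken from the patch, and the function names are hypothetical:

```cpp
// Hypothetical illustration: the caller knows the two arguments never
// alias, but that fact is only usable when analyzing the callee's body
// if caller-side facts are propagated downward into it.
void copy(int *dst, const int *src) {
  // Analyzed in isolation, dst and src have to be treated as MayAlias.
  *dst = *src + 1;
}

int main() {
  int a = 0, b = 1;
  copy(&a, &b); // at this call site &a and &b are clearly NoAlias
  return a;
}
```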

I wanted to share it in case there were lessons of value. It is not in a very clean state at the moment, but I can clean it up. Let me know how I can help.

Sorry, I forgot your last question,

The benchmarks were a rather arbitrarily selected set of files out of Facebook's codebase, so not really suitable to share.

Hi Jia, nice to meet you,

Hi David,

I am the one who was responsible for the CFLAA refactoring over the summer. I've sent out another email on llvm-dev, and you can find more about my work in my GSoC final report.

Is this report available?

Yes. You can find the PDF in my GitHub repository: https://github.com/grievejia/GSoC2016 (work product of my Google Summer of Code 2016 project).

Sorry, I forgot your last question,

The benchmarks were a rather arbitrarily selected set of files out of Facebook's codebase, so not really suitable to share.

Thanks for the info!

I asked the question with the intention of better understanding the results you posted. Based on my own experience, how the benchmarks are written sometimes has a noticeable impact on the effectiveness of cfl-aa. For example, if the code is written in such a way that a large buffer gets allocated first and then the majority of the program logic deals with pointers obtained by offsetting into this buffer, it is unlikely that the current implementation of cfl-aa will produce any useful results, given its field-insensitive nature. Identifying common program idioms like this and adapting cfl-aa and its clients to handle them better is one of the things I'd be interested in looking into.
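
To make the idiom concrete, here is a minimal made-up C++ sketch of the kind of code I have in mind (the buffer size and offsets are invented; only the pattern matters): one large allocation up front, with the rest of the logic working on pointers offset into it.

```cpp
#include <cstdlib>

// Everything below derives its pointers from one large allocation.
void process(char *buffer) {
  int   *p = reinterpret_cast<int *>(buffer);        // bytes [0, 4)
  float *q = reinterpret_cast<float *>(buffer + 64); // bytes [64, 68)
  *p = 42;    // never touches *q, but a field-insensitive analysis
  *q = 1.0f;  // collapses both pointers onto the same memory object
}

int main() {
  char *buffer = static_cast<char *>(std::malloc(1024));
  if (buffer) {
    process(buffer);
    std::free(buffer);
  }
  return 0;
}
```

Because the analysis is field-insensitive, a query on p and q comes back MayAlias even though they address disjoint byte ranges of the buffer.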