Learning Loop Transformation Policies

This project is a continuation of last year's. In 2021, the project achieved its first milestone: separating correctness decisions from policy decisions. This opens up the possibility of replacing the latter with machine-learned policies. Rough milestones:

  • select an initial set of features and use the existing ML Guided Optimizations (MLGO) infra to generate training logs
  • define a reward signal, computable at compile time, to guide a reinforcement learning training loop
  • iterate through training and refine the reward and feature set (a toy end-to-end sketch follows this list)
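
To make the milestones concrete, here is a minimal, self-contained sketch of the shape of that loop: a fixed legality check kept apart from the learned policy, hand-picked loop features, a compile-time reward (code-size delta against a no-unroll baseline), and a plain REINFORCE update for a toy unrolling decision. Everything in it (the feature set, the simulated cost model, the policy parametrization) is an illustrative assumption, not MLGO's actual API.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class LoopFeatures:
    trip_count: int   # (estimated) iteration count
    body_size: int    # instructions in the loop body

def is_legal_to_unroll(f: LoopFeatures) -> bool:
    # Correctness decision: fixed logic, never learned. Placeholder check.
    return f.trip_count > 1

def simulated_code_size(f: LoopFeatures, unroll: bool) -> int:
    # Stand-in cost model: unrolling duplicates the body (up to 4 copies)
    # but removes the compare/branch overhead once fully unrolled.
    if not unroll:
        return f.body_size + 2            # +2 for the loop's compare/branch
    copies = min(f.trip_count, 4)
    saved = 2 if f.trip_count <= 4 else 0
    return f.body_size * copies - saved

def reward(f: LoopFeatures, unroll: bool) -> float:
    # Compile-time-computable reward: code-size improvement over the
    # no-unroll baseline (positive is better).
    return float(simulated_code_size(f, False) - simulated_code_size(f, unroll))

# Policy decision: a logistic model over normalized features.
w, b = [0.0, 0.0], 0.0

def features(f: LoopFeatures):
    return (f.trip_count / 16.0, f.body_size / 20.0)

def p_unroll(f: LoopFeatures) -> float:
    x = features(f)
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

lr = 0.01
for _ in range(20000):
    f = LoopFeatures(random.randint(1, 16), random.randint(1, 20))
    if not is_legal_to_unroll(f):
        continue                          # the policy never sees illegal cases
    p = p_unroll(f)
    act = random.random() < p             # sample an action: unroll or not
    r = reward(f, act)
    # REINFORCE: move parameters along r * d(log pi(act | f))/d(theta).
    g = (1.0 - p) if act else -p
    x = features(f)
    w[0] += lr * r * g * x[0]
    w[1] += lr * r * g * x[1]
    b += lr * r * g

# The learned policy should unroll only tiny loops, where duplication
# is cheaper than the control-flow overhead it removes.
print(p_unroll(LoopFeatures(trip_count=2, body_size=1)))    # typically high
print(p_unroll(LoopFeatures(trip_count=16, body_size=20)))  # typically low
```

In the real project the cost model is the compiler itself: the MLGO infrastructure logs features and decisions during compilation, and training happens offline against those logs; the toy above just compresses that loop into a single process.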

Please contact @mtrofin and @jdoerfert for more details.

I would be interested in working on this project for GSoC 2022. I have some experience with machine learning, but somewhat limited experience working on compilers. Where is the best place to discuss this before I write up the formal GSoC proposal?

We have a Slack channel; if you want to join, I need your email for that.

agrossman154 at yahoo.com

done


Hi, sorry to bother you. I know it's late at this point for GSoC 2022, but I'm interested in this project and would like to join. I have some experience with loop transformations in high-level synthesis and neural networks. Could you please add me? My email is jiamin9527 at gmail.com. Thank you.

Added you to Slack; let's chat about options there.