Suggestions on register allocation using reinforcement learning

Hi everyone,

I’m quite new to LLVM and working on Q-learning for register allocation (just a hobby). RegAllocRL is based on RegAllocBase and RegAllocBasic, with some features added to implement the Q-learning algorithm.

I’ve currently run it as a MachineFunctionPass and things seem to work in simple cases. In order to make progress on training, I have some questions and would appreciate any help:

  1. Is there any file/data format used in LLVM to store information at each compilation step? As far as I know, a pass cannot run several times within one compilation, which means I have to export data to a file before the compilation finishes and load it back in the next iteration.
  2. To train the Q-learning parameters, the register allocation pass needs to run for several iterations to converge. Right now I train by compiling the target program once per iteration, which is time-consuming. Is there a way to train by repeatedly running just the register allocation pass to optimize my parameters?

Since I’m new and this project is just experimental, I would appreciate any help and comments. Thanks in advance!

https://github.com/Knight-X/llvm/blob/reg_ml/lib/CodeGen/RegAllocRL.h

We have the .mir file format to save/restore machine functions (see http://llvm.org/docs/MIRLangRef.html). Something like
llc -stop-before=greedy something.ll -o something.mir
llc -run-pass=greedy something.mir

should allow you to run the greedy allocator separately. Note, though, that MIR serialization is only a 95% solution today and doesn’t handle all programs (though it should be easy to extend/improve it, patches welcome :slight_smile:).

  • Matthias