Register Allocation Graph Coloring algorithm and Others

Hi GCC and LLVM developers,

I am learning Register Allocation algorithms, and I am clear on the following:

* Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg (hard)

* Memory (20 - 100 cycles) is more expensive than a Register (1 cycle), but the allocator has to emit spill code when no PhysReg is available

* Folding spill code into instructions, handling register coalescing, splitting live ranges, doing rematerialization, and doing shrink wrapping are harder than RegAlloc

* LRA and IRA are the default RA passes in GCC:

$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409

* Greedy is the default RA pass in LLVM

But I have some questions; please give me some hints, thanks a lot!

* IRA is a regional register allocator performing graph coloring on a top-down traversal of nested regions. Is it global, compared with the local LRA?

* Do the papers by Briggs and Chaitin contradict[2] themselves when one examines the text of the paper vs. the pseudocode provided?

* Why is the interference graph expensive to build[3]?

And I am practicing[4] using HEA, developed by Dr. Rhydian Lewis, with LLVM first.

[1] D39712 [ARM] Add an alias for psr and psr_nzcvq

[2] http://lists.llvm.org/pipermail/llvm-dev/2008-March/012940.html

[3] https://github.com/jotaviobiondo/llvm-register-allocator - A graph coloring register allocator for LLVM.

[4] https://github.com/xiangzhai/llvm/tree/avr/include/llvm/CodeGen/GCol

Hi GCC and LLVM developers,

I am learning Register Allocation algorithms, and I am clear on the following:

* Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg (hard)

* Memory (20 - 100 cycles) is more expensive than a Register (1 cycle), but the allocator has to emit spill code when no PhysReg is available

It might be much less if the memory value is in the L1 cache.

* Folding spill code into instructions, handling register coalescing, splitting live ranges, doing rematerialization, and doing shrink wrapping are harder than RegAlloc

RegAlloc in a wide sense includes all of these tasks and more. For some architectures, other tasks, like the right live range splitting, might be even more important for generated code quality than just better graph coloring.

* LRA and IRA are the default RA passes in GCC:

$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409

* Greedy is the default RA pass in LLVM

But I have some questions; please give me some hints, thanks a lot!

* IRA is a regional register allocator performing graph coloring on a top-down traversal of nested regions. Is it global, compared with the local LRA?

IRA is a global RA. The description of its initial version can be found at

https://vmakarov.fedorapeople.org/vmakarov-submission-cgo2008.pdf

LRA is in some ways also a global RA, but it is a very simplified version of one (e.g. LRA does not use a conflict graph, and its coloring algorithm is closer to priority coloring). LRA does a lot of other very complicated things besides RA, for example instruction selection, which is quite specific to the GCC machine description. Usually the code selection task is a separate pass in other compilers. Generally speaking, LRA is more complicated, more machine dependent, and more buggy than IRA. But fortunately LRA is less complicated than its predecessor, the so-called reload pass.
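To illustrate what "priority coloring" means here: in the Chow-Hennessy style, live ranges are assigned registers in decreasing priority order (roughly spill benefit per unit of range size) rather than via a simplify/select stack. A toy sketch, with live ranges simplified to single intervals and a made-up priority field (purely illustrative, not GCC's actual code):

```cpp
#include <algorithm>
#include <vector>

// Priority-based coloring sketch: live ranges are visited in decreasing
// priority order and greedily given the first register not already taken
// by an overlapping, previously-assigned range.  Ranges that get no
// register keep reg == -1, i.e. they spill.
struct Range { int start, end; double priority; int reg = -1; };

void priorityColor(std::vector<Range> &ranges, int numRegs) {
  std::vector<int> order(ranges.size());
  for (size_t i = 0; i < order.size(); ++i) order[i] = i;
  std::sort(order.begin(), order.end(), [&](int a, int b) {
    return ranges[a].priority > ranges[b].priority;
  });
  for (int i : order) {
    std::vector<bool> taken(numRegs, false);
    for (const Range &r : ranges)
      if (r.reg >= 0 && r.start < ranges[i].end && ranges[i].start < r.end)
        taken[r.reg] = true;               // overlapping, already assigned
    for (int reg = 0; reg < numRegs; ++reg)
      if (!taken[reg]) { ranges[i].reg = reg; break; }
  }
}
```

Note there is no backtracking: once a high-priority range takes a register, lower-priority overlapping ranges simply work around it or spill.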

The IRA and LRA names have a long history and do not correctly reflect the current situation.

It would be possible to incorporate LRA's tasks into IRA, but the final RA would be much slower, even more complicated and hard to maintain, and the generated code would not be much better. So to improve RA maintainability, RA is divided into two parts solving somewhat different tasks. This is a typical engineering approach.

* Do the papers by Briggs and Chaitin contradict[2] themselves when one examines the text of the paper vs. the pseudocode provided?

I haven't examined Preston Briggs's work that thoroughly, so I cannot say whether that is true. Even so, it is natural for there to be discrepancies between pseudocode and its description, especially for a description of that size.

To me, Preston Briggs is famous for his introduction of optimistic coloring.
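For readers new to the idea: optimistic coloring changes Chaitin's simplify phase so that a node of degree >= K is pushed on the stack anyway instead of being marked for spilling immediately, on the chance that its neighbors end up sharing colors. A minimal sketch of the simplify/select loop (a simplification with illustrative names, not Briggs's actual implementation):

```cpp
#include <set>
#include <stack>
#include <vector>

// Briggs-style optimistic coloring on an undirected interference graph
// given as an adjacency list.  Returns each node's color, or -1 if the
// node could not be colored and must spill.
std::vector<int> optimisticColor(const std::vector<std::vector<int>> &adj,
                                 int K) {
  const int n = adj.size();
  std::vector<int> degree(n);
  for (int v = 0; v < n; ++v) degree[v] = adj[v].size();

  std::vector<bool> removed(n, false);
  std::stack<int> order;
  // Simplify: repeatedly remove a node of degree < K; when none exists,
  // optimistically push some remaining node instead of spilling it now.
  for (int pushed = 0; pushed < n; ++pushed) {
    int pick = -1;
    for (int v = 0; v < n; ++v)
      if (!removed[v] && degree[v] < K) { pick = v; break; }
    if (pick == -1)                     // blocked: optimistic choice
      for (int v = 0; v < n; ++v)
        if (!removed[v]) { pick = v; break; }
    removed[pick] = true;
    order.push(pick);
    for (int w : adj[pick])
      if (!removed[w]) --degree[w];
  }

  // Select: pop nodes and give each the lowest color unused by its
  // already-colored neighbors; a node with no free color becomes a spill.
  std::vector<int> color(n, -1);
  while (!order.empty()) {
    int v = order.top(); order.pop();
    std::set<int> used;
    for (int w : adj[v])
      if (color[w] >= 0) used.insert(color[w]);
    for (int c = 0; c < K; ++c)
      if (!used.count(c)) { color[v] = c; break; }
  }
  return color;
}
```

A 4-cycle with K=2 shows the win: every node has degree 2, so Chaitin's test would spill, but the graph is 2-colorable and the optimistic pass colors it completely.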

* Why is the interference graph expensive to build[3]?

That is because it might be an O(N^2) algorithm. There are a lot of publications investigating the building of conflict graphs and its cost in RAs.

And I am practicing[4] using HEA, developed by Dr. Rhydian Lewis, with LLVM first.

When I started to work on RAs, very long ago, I used about the same approach: a lot of tiny transformations directed by a cost function and using metaheuristics (I also used tabu search, as HEA does). Nothing good came out of it.

If you are interested in RA algorithms and architectures, I'd recommend Michael Matz's article

ftp://gcc.gnu.org/pub/gcc/summit/2003/Graph%20Coloring%20Register%20Allocation.pdf

as a starting point.

Hi Vladimir,

Thanks for your kind and very detailed response!

Hi GCC and LLVM developers,

I am learning Register Allocation algorithms, and I am clear on the following:

* Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg (hard)

* Memory (20 - 100 cycles) is more expensive than a Register (1 cycle), but the allocator has to emit spill code when no PhysReg is available

It might be much less if the memory value is in the L1 cache.

* Folding spill code into instructions, handling register coalescing, splitting live ranges, doing rematerialization, and doing shrink wrapping are harder than RegAlloc

RegAlloc in a wide sense includes all of these tasks and more. For some architectures, other tasks, like the right live range splitting, might be even more important for generated code quality than just better graph coloring.

* LRA and IRA are the default RA passes in GCC:

$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409

* Greedy is the default RA pass in LLVM

But I have some questions; please give me some hints, thanks a lot!

* IRA is a regional register allocator performing graph coloring on a top-down traversal of nested regions. Is it global, compared with the local LRA?

IRA is a global RA. The description of its initial version can be found

https://vmakarov.fedorapeople.org/vmakarov-submission-cgo2008.pdf

I am reading this paper at present :)

LRA is in some ways also a global RA, but it is a very simplified version of one (e.g. LRA does not use a conflict graph, and its coloring algorithm is closer to priority coloring). LRA does a lot of other very complicated things besides RA, for example instruction selection, which is quite specific to the GCC machine description. Usually the code selection task is a separate pass in other compilers. Generally speaking, LRA is more complicated, more machine dependent, and more buggy than IRA. But fortunately LRA is less complicated than its predecessor, the so-called reload pass.

The IRA and LRA names have a long history and do not correctly reflect the current situation.

It would be possible to incorporate LRA's tasks into IRA, but the final RA would be much slower, even more complicated and hard to maintain, and the generated code would not be much better. So to improve RA maintainability, RA is divided into two parts solving somewhat different tasks. This is a typical engineering approach.

I am debugging with printf to become familiar with LRA and IRA.

* Do the papers by Briggs and Chaitin contradict[2] themselves when one examines the text of the paper vs. the pseudocode provided?

I haven't examined Preston Briggs's work that thoroughly, so I cannot say whether that is true. Even so, it is natural for there to be discrepancies between pseudocode and its description, especially for a description of that size.

To me, Preston Briggs is famous for his introduction of optimistic coloring.

* Why is the interference graph expensive to build[3]?

That is because it might be an O(N^2) algorithm. There are a lot of publications investigating the building of conflict graphs and its cost in RAs.

And I am practicing[4] using HEA, developed by Dr. Rhydian Lewis, with LLVM first.

When I started to work on RAs, very long ago, I used about the same approach: a lot of tiny transformations directed by a cost function and using metaheuristics (I also used tabu search, as HEA does). Nothing good came out of it.

Thanks for your lesson! But are there any benchmarks from when you used tabu search, HEA, AntCol, etc., such as https://pbs.twimg.com/media/DRD-kxcUMAAxZec.jpg?

If you are interested in RA algorithms and architectures, I'd recommend Michael Matz's article

ftp://gcc.gnu.org/pub/gcc/summit/2003/Graph%20Coloring%20Register%20Allocation.pdf

as a starting point.

Thanks! I am reading it.

I've read both of these papers many times (in the past) and I don't recall
any contradictions in them. Can you (Dave?) be more specific about what you
think are contradictions?

I do admit that pseudo code in papers can be very terse, to the point that
they don't show all the little details that are needed to actually implement
them, but they definitely shouldn't contradict their written description.
I was very grateful that Preston was more than willing to answer all my many
questions regarding his allocator and the many many details he couldn't
mention in his Ph.D. thesis, let alone a short paper.

Peter

Hi Dr. Rhydian,

I am trying to build a DIMACS graph with VirtRegs (pseudos). I can count G.Nodes and G.Edges: https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L359

It just translates gCol/HybridEA's inputDimacsGraph to a traversal of LLVM pseudo registers, but I am not clear what Node1 or Node2 is. Do they refer to the indices of vertices?

In the gCol/HybridEA/graph.txt, for example:

e 2 1

Does it mean there is an edge between Node2 and Node1? If so, it might be equivalent to LLVM's VirtReg1->overlaps(*VirtReg2).

And following your logic:

Node1 = 2, Node2 = 1;
if (Node1 < 1 || Node1 > G.Nodes || Node2 < 1 || Node2 > G.Nodes)
  errs() << "Node is out of range\n";
Node1--, Node2--; // Why minus?
if (G[Node1][Node2] == 0)
  G.Edges++;
G[Node1][Node2] = 1, G[Node2][Node1] = 1;
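On the "Why minus?" question: the DIMACS format numbers vertices from 1, while C/C++ arrays are 0-based, so each index is decremented once after the range check. A hedged, self-contained sketch of a parser for the "e" lines (parseEdgeLine and its shape are my own illustration, not gCol's actual function):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one DIMACS edge line ("e 2 1" means an undirected edge between
// vertex 2 and vertex 1).  DIMACS vertices are 1-based, so we subtract 1
// to index the 0-based adjacency matrix.  Returns false on a malformed
// or out-of-range line.
bool parseEdgeLine(const std::string &line,
                   std::vector<std::vector<int>> &g, int &edges) {
  std::istringstream in(line);
  char tag;
  int n1, n2;
  if (!(in >> tag >> n1 >> n2) || tag != 'e') return false;
  const int nodes = g.size();
  if (n1 < 1 || n1 > nodes || n2 < 1 || n2 > nodes) return false;
  --n1; --n2;                      // 1-based DIMACS -> 0-based array index
  if (g[n1][n2] == 0) ++edges;     // count each undirected edge only once
  g[n1][n2] = g[n2][n1] = 1;       // symmetric: the graph is undirected
  return true;
}
```

So yes: "e 2 1" declares one undirected edge, and in LLVM terms the analogous condition for adding it would be VirtReg1->overlaps(*VirtReg2).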

Please give me some hints, thanks a lot!

Leslie Zhai <lesliezhai@llvm.org.cn> writes:

* Memory (20 - 100 cycles) is more expensive than a Register (1 cycle), but
the allocator has to emit spill code when no PhysReg is available

As Vladimir said, the cache makes this kind of analysis much more
tricky. It's not necessarily the case that memory=bad and
register=good. Since there are tradeoffs involved, one needs to
evaluate different strategies to determine *when* memory is worse than a
register. It may very well be the case that leaving something in memory
frees up a register for something much more important to use it. All of
the register allocation algorithms try to determine this kind of thing
through various heuristics. Which heuristic is most effective is highly
target-dependent.

In my experience, changing heuristics can swing performance 20% or more
in some cases. Today's processors and memory systems are so complex
that 2nd- and even 3rd-order effects become important.

It is very, very wrong on today's machines to use # of spills as a
metric to determine the "goodness" of an allocation. Determining *what*
to spill is much more important than the raw number of spills. Many
times I have a seen codes generated with more spills perform much better
than the code generated with fewer spills. Almost all of the papers
around the time of Chaitin-Briggs used # of spills as the metric. That
may have been appropriate at that time but take those results with giant
grains of salt today. Of course they are still very good papers for
understanding algorithms and heuristics.

The best way I know to determine what's best for your target is to run a
whole bunch of *real* codes on them, trying different allocation
algorithms and heuristics. It is a lot of work, still worthy of a
Ph.D. even today. Register allocation is not a "solved" problem and
probably never will be as architectures continue to change and become
ever more diverse.

* Folding spill code into instructions, handling register coalescing,
splitting live ranges, doing rematerialization, and doing shrink wrapping
are harder than RegAlloc

Again, as Vladimir said, they are all part of register allocation.
Sometimes they are implemented as separate passes but logically they all
contribute work to the task of assigning registers as efficiently as
possible. And all of them use heuristics. Choosing when and when not
to, for example, coalesce can be important. Splitting live ranges is
logically the opposite of coalescing. Theoretically one can "undo" a
bad coalescing decision by re-splitting the live range but it's usually
not that simple as other decisions in the interim can make that tricky
if not impossible. It is a delicate balancing act.

* Do the papers by Briggs and Chaitin contradict[2] themselves when one
examines the text of the paper vs. the pseudocode provided?

As others have said, I don't recall any inconsistencies but that doesn't
mean there aren't bugs and/or incomplete descriptions open to
interpretation. One of my biggest issues with our computer academic
culture is that we do not value repeatability. It is virtually
impossible to take a paper, implement the algorithm as described (or as
best as possible given ambiguity) and obtain the same results. Heck,
half my Ph.D. dissertation was dissecting a published paper, examining
the sensitivity of the described algorithm to various parts of the
described heuristic that were ambiguous. By interpreting the heuristic
description in different ways I observed very different results. I read
papers for the algorithms, heuristics and ideas in them. I pay no
attention to results because in the real world we have to implement the
ideas and test them ourselves to see if they will help us.

Peter is right to point you to Preston. He is very accessible, friendly
and helpful. I had the good fortune to work with him for a few years
and he taught me a lot. He has much real-world experience on codes
actually used in production. That experience is gold.

Good luck to you! You're entering a world that some computing
professionals think is a waste of time because "we already know how to
do that." Those of us in the trenches know better. :)

                               -David

Hi David,

Thanks for your teaching!

I am a newbie in the compiler area; I only learned Compiler Principles in 2002: https://www.leetcode.cn/2017/12/ilove-compiler-principle.html

But I like to practice and learn :) https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327 because theory is not always correct, or is misunderstood by people, so I want to compare solutionByHEA, IRA, Greedy, PBQP, and other algorithms.

Thanks for your lessons correcting my mistakes, such as memory=bad, register=good; I need to find the answer to *when* memory is worse than a register. I am maintaining the AVR target; there are 32 general registers, 32K flash, and 2K SRAM (http://www.atmel.com/Images/Atmel-42735-8-bit-AVR-Microcontroller-ATmega328-328P_Datasheet.pdf), so perhaps for an MCU, memory might be more expensive than a register? But what about AMDGPU or VLIW processors? I have no experience with them; please teach me.

I am reading LLVM's code SpillXXX, LiveRangeXXX, RegisterCoalescer, etc. to get the whole view of CodeGen.

I am reading Dr. Rhydian Lewis's book A Guide to Graph Colouring: Algorithms and Applications (Springer) and other papers. Even if HEA is not the best solution, I still want to practice and see the benchmark. I am not a computing professional, and I am only 34 years old; perhaps I have enough time to waste :)

Leslie Zhai <lesliezhai@llvm.org.cn> writes:

But I like to practice and learn :)
https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327 because theory is not always correct, or is misunderstood by people, so I
want to compare solutionByHEA, IRA, Greedy, PBQP, and other algorithms.

That is a very good way to learn. Learn by doing and observing how
results change as parameters vary. You will never stop learning. :)

Thanks for your lessons correcting my mistakes, such as memory=bad,
register=good; I need to find the answer to *when* memory is worse
than a register. I am maintaining the AVR target; there are 32 general
registers, 32K flash, and 2K SRAM
(http://www.atmel.com/Images/Atmel-42735-8-bit-AVR-Microcontroller-ATmega328-328P_Datasheet.pdf),
so perhaps for an MCU, memory might be more expensive than a register? But
what about AMDGPU or VLIW processors? I have no experience with them;
please teach me.

I do not have much experience with those architectures either. As I
said, the "best" algorithm for register allocation is very
target-dependent. What works well on AVR might work very poorly on a
GPU. The only way to know is to test, test, test. Of course one can
make some educated guesses to narrow the amount of testing. Many times
a "good" allocator is "good enough" on many targets. I work for a
company that tries to squeeze every last bit of performance out of
codes. We're a bit fanatical that way so we try lots of things. Most
places aren't that obsessive. :)

I am reading LLVM's code SpillXXX, LiveRangeXXX, RegisterCoalescer,
etc. to get the whole view of CodeGen.

Those are great places to learn about register allocation! They can
also be complicated and a bit daunting. The folks on the LLVM list can
help guide you but you will also do well just making observations,
stepping through with a debugger, etc. I certainly don't claim to
understand all of the nuances in this code. Lots of people have
contributed to it over the years.

I am reading Dr. Rhydian Lewis's book A Guide to Graph Colouring:
Algorithms and Applications (Springer) and other papers. Even
if HEA is not the best solution, I still want to practice and see the
benchmark. I am not a computing professional, and I am only 34 years old;
perhaps I have enough time to waste :)

I am not familiar with that book but lots of reading will do you well.
There's an endless supply of papers to look at. And practice, practice,
practice. You seem to be on the right track!

                             -David

Hi David,

Thanks for your teaching!

I am a newbie in the compiler area; I only learned Compiler Principles in 2002: https://www.leetcode.cn/2017/12/ilove-compiler-principle.html

But I like to practice and learn :) https://github.com/xiangzhai/llvm/blob/avr/lib/CodeGen/RegAllocGraphColoring.cpp#L327 because theory is not always correct, or is misunderstood by people, so I want to compare solutionByHEA, IRA, Greedy, PBQP, and other algorithms.

Just as another tip:

  • Indeed, in my experience: just implementing some algorithms yourself, comparing them against what existing compilers produce, and then figuring out why is the best way to learn about allocators.

  • Don’t just put emphasis on all the different coloring techniques. In practice what matters is usually the way you deal with register constraints and other target-specific coloring constraints, rematerialization, and how you get coalescing into the picture. Most regalloc papers don’t talk much about that, as it’s highly finicky and often target specific. But these things have a huge impact on allocation quality and can make or break any of those algorithms…

- Matthias

Hi Matthias,

Thanks for your hint!

It is just for learning and practicing for me, just like migrating DragonEgg (http://lists.llvm.org/pipermail/llvm-dev/2017-September/117201.html); the motivation is to learn from GCC and LLVM developers.

Hi Leslie,

As others have pointed out, the notion that register allocation is isomorphic to graph coloring is poppycock. There are other important aspects, in particular the placement of spill/fill/copy instructions. The importance of graph coloring relative to spill code placement depends on how many registers you have available. If you are generating code for 32-bit x86 which has only 6-7 general purpose registers, you will have so much spill code and short live ranges that graph coloring doesn’t matter much at all. On the other hand, if you have 32 registers like Chaitin did, you have much less spilling in typical code, and the graph coloring aspect becomes important.

Early compilers would keep each local variable in a stack slot, and the register allocation optimization would literally allocate a whole local variable to a register. The C “register” keyword makes sense in that context. Later improvements like copy coalescing and live range splitting meant that multiple local variables could use the same register and a variable could live in different places at different times. It is sometimes useful to take this development to its logical extreme and look at register allocation as a caching problem: the register allocator’s job is to make sure that values are available to the instructions that need them, using the registers as a cache to get the values there in the most efficient way possible.

Guo, J., Garzarán, M. J., & Padua, D. (2004). The Power of Belady’s Algorithm in Register Allocation for Long Basic Blocks. In Languages and Compilers for Parallel Computing (Vol. 2958, pp. 374–389). Berlin, Heidelberg: Springer Berlin Heidelberg. http://doi.org/10.1007/978-3-540-24644-2_24

Braun, M., & Hack, S. (2009). Register spilling and live-range splitting for SSA-form programs. International Conference on Compiler Construction.

When you look at register allocation that way, the graph coloring aspect almost disappears. The optimum approach is probably somewhere in the middle.
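The caching view can be made concrete with Belady's MIN policy, the subject of the Guo/Garzarán/Padua paper above: when a register is needed and all are occupied, evict the resident value whose next use is furthest in the future. A toy sketch for a single basic block, assuming precomputed, sorted use positions per value (the names and data layout are my own illustration):

```cpp
#include <climits>
#include <vector>

// Belady-style spill choice: among currently-resident values, evict the
// one whose next use after position `pos` is furthest in the future.
// `uses[v]` holds the sorted instruction positions at which value v is used.
int chooseSpill(const std::vector<int> &resident,
                const std::vector<std::vector<int>> &uses, int pos) {
  int victim = -1, furthest = -1;
  for (int v : resident) {
    int next = INT_MAX;                 // never used again: ideal victim
    for (int u : uses[v])
      if (u > pos) { next = u; break; } // first use strictly after pos
    if (next > furthest) { furthest = next; victim = v; }
  }
  return victim;
}
```

Belady's policy is optimal for a straight-line block with known future uses, which is exactly why it is a useful mental model: it rewards spilling the *right* value, not the fewest values.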

A third important aspect is register constraints on individual instructions. Sometimes you almost need a little constraint solver just to figure out a valid register assignment for a single instruction. Preston Briggs dealt with this in his thesis, but it hasn’t gotten as much attention as graph coloring since.

Pereira, F. Q., & Palsberg, J. (2008). Register allocation by puzzle solving.

Regards,
/jakob

Hi Jakob,

Thanks for your kind response!

My use case is the AVR and RISC-V targets, and I just want to learn and practice HEA in RA. Thanks for your sharing.

So, both AVR and RISC-V are fairly register-rich, with usually 32 registers. RV32E only has 16, but that’s still a lot better than i386. If you use a lot of 16-bit integers then AVR also effectively has only 16 registers (or a few more with a mix of 8- and 16-bit variables). 32-bit integers should be rare in AVR code, but soft float/double variables are common in Arduino code (both are implemented as 32 bits), so you only have room for 8 of those.

RISC-V doesn’t have any hard constraints on something that must go in a certain register, except the usual argument passing/return convention. There is an advantage to allocating both the data src/dst register and the pointer base register of a load or store from x8-x15 (s0-1, a0-5) as much as possible, as this allows the assembler to use a two-byte instruction instead of a four-byte instruction.

I haven’t looked at AVR in detail for a while, but I seem to recall the top six registers make up three 16-bit pointer registers X, Y, Z. Any of them can be used for (C language) *p, *--p, *p++; only Y and Z can be used for p->foo, and only Z can be used for computed jumps (including the function link register) or loading constants from program memory. Also, the various multiply instructions take their 8-bit operands from any registers but always produce the 16-bit result in r1:r0. Annoying, but nowhere near as bad as i386, since r0 and r1 are not used for anything else. The usual ABI makes r0 a clobber-at-will temp. r1 is supposed to be “always zero”, so you need to CLR it after retrieving (or ignoring) the high bits of a multiply result.

Hi Bruce,

Thanks for your sharing!

I am porting GlobalISel to the RISC-V target[1], the highest priority in the TODO list[2]. You are welcome to contribute to lowRISC; once all the issues are fixed, I could try to implement RegAllocGraphColoring with HEA and write great machine schedulers.

[1] https://github.com/lowRISC/riscv-llvm/issues/19
[2] https://github.com/lowRISC/riscv-llvm/issues