Open Project : Inter-procedural Register Allocation [GSoC 2016]

Hi Vivek

Yes.

I do not know if there is a paper on this as this is quite trivial, but IIRC the Open64 register allocator does that.
Anyhow, the algo is:
Given a call graph SCC

  • Allocate the functions with no calls, or those where every callee has already been allocated.
  • Propagate the clobbered registers to the callers of that function by updating the related regmasks on the call sites.
    Repeat until there are no more candidates.
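
To make the bottom-up walk concrete, here is a minimal C++ sketch; the Function/RegMask types and the helper are made-up names for illustration, not the real LLVM classes or passes:

#include <map>
#include <set>
#include <vector>

// Hypothetical types: a register is just an id, a RegMask is the set of
// physical registers a function (or a call to it) may clobber.
using Register = unsigned;
using RegMask = std::set<Register>;

struct Function {
  RegMask ClobberedByBody;          // registers this function's own code clobbers
  std::vector<Function *> Callees;  // one entry per direct call site
};

// Walk functions in bottom-up (post-order) call-graph order, so each callee is
// summarized before its callers. A function's clobber set is its own clobbers
// plus those of everything it calls; a real pass would also use that set to
// tighten the regmask operand on each call site in the caller.
std::map<Function *, RegMask>
propagateRegMasks(const std::vector<Function *> &BottomUpOrder) {
  std::map<Function *, RegMask> Summary;
  for (Function *F : BottomUpOrder) {
    RegMask Clobbers = F->ClobberedByBody;
    for (Function *Callee : F->Callees) {
      auto It = Summary.find(Callee);
      if (It == Summary.end())
        continue;  // unknown/unprocessed callee: keep the conservative ABI mask
      Clobbers.insert(It->second.begin(), It->second.end());
    }
    Summary[F] = Clobbers;
  }
  return Summary;
}

For an SCC that actually contains cycles you would iterate this over the SCC members until the summaries stop changing, which is the "repeat until there are no more candidates" step above.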

Right direction overall. The simplest approach to this is feasible within a summer and should definitely give you good results when you have hot calls with many spills/fills around them that could be eliminated.

One does not necessarily need the call graph. The compiler can do this as an opportunistic optimization: the callee collects a resource mask and the caller consumes it when it is “there”. Within a module, when the callee (a leaf) is compiled before the caller, the information is “there”. When the call graph is available you want a bottom-up walk for this optimization.

A few things to keep an eye on:

  • The twist here could be that the bottom-up order conflicts with the layout order, so the two optimizations would have to run independently. (I have not looked into the layout algorithm, so this might not be an actual issue here.)

Layout is just the order functions reach the AsmPrinter, so you’re right that this is going to make the function output different. If we care about the order, which we may do, then we’d need to cache the data in the AsmPrinter and reorder it there somehow.

Pete Cooper, do you mean to cache function-order-related data in the AsmPrinter?

Yeah, exactly. So if the module has [foo, bar] in that order, but you compile them as [bar, foo] because of SCC, then you may want to somehow reorder them during the AsmPrinter so that they are emitted as [foo, bar] again.

Of course this shouldn’t matter (I can’t think of a case where it would matter), but for ease of debugging at least, it is nice to have functions emitted in the same order as they are in the IR.
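
If it helps, the caching/reordering could be as simple as buffering each function's output together with its original module index and flushing in that order once everything has been compiled. A rough sketch with made-up names (this is not the real AsmPrinter interface):

#include <map>
#include <string>
#include <utility>
#include <vector>

// Each function is compiled in SCC (bottom-up) order, but we remember where it
// sat in the original module and only emit once everything is available.
struct EmittedFunction {
  unsigned ModuleIndex;  // position of the function in the IR module
  std::string Body;      // whatever the printer produced for it
};

std::string emitInModuleOrder(std::vector<EmittedFunction> CompileOrder) {
  std::map<unsigned, std::string> ByIndex;
  for (EmittedFunction &EF : CompileOrder)
    ByIndex[EF.ModuleIndex] = std::move(EF.Body);

  std::string Out;
  for (auto &Entry : ByIndex)  // std::map iterates keys in ascending order
    Out += Entry.second;
  return Out;
}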

Some bonus features that come from doing codegen on the call graph, and specifically from having accurate regmasks and similar information:

  • The X86 VZeroUpper pass should insert fewer VZeroUpper instructions before calls, and could possibly even learn that after the call the state of vzeroupper is known.
  • Values left in registers by the callee can be reused by the caller instead of reloading them.

The second one here is fun. Imagine this pseudo code:

foo:
  r0 = 1000
  ret

bar:
  call foo
  vreg1 = vreg2 + 1000

You know which registers contain which values after the call to foo. In this case you know that the value 1000 is already available in a register, so you can avoid loading it for use in the add. You could have other values in registers too, even those which are passed in to foo. The ‘this’ pointer is the best example, as it’s quite likely that r0 still contains the this pointer after a function call which didn’t overwrite r0 for the return value.

The above-mentioned case is interesting and useful; perhaps a simple analysis pass which can return a map from value to register would help.

Yeah, I think it could be interesting. Of course one of the interesting things is deciding when it’s more profitable not to use the map. You would not, for example, choose to reserve a register containing a constant for a long time, as it would almost always be cheaper to just regenerate the constant when needed. But a constant used very soon after a call may still be useful.
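
Just to illustrate the shape such an analysis result might take, here is a hypothetical sketch; the names and the simple distance cutoff are only stand-ins for a real cost model:

#include <cstdint>
#include <map>
#include <optional>

using Register = unsigned;

// Hypothetical post-call summary: which constant each physical register is
// known to hold immediately after a given call site.
struct PostCallContents {
  std::map<Register, int64_t> KnownConstant;

  // Return a register already holding Value, but only if the use is close
  // enough that keeping the register live beats rematerializing the constant.
  std::optional<Register> registerHolding(int64_t Value,
                                          unsigned InstrsUntilUse,
                                          unsigned MaxDistance = 4) const {
    if (InstrsUntilUse > MaxDistance)
      return std::nullopt;  // cheaper to just regenerate the constant
    for (const auto &[Reg, Known] : KnownConstant)
      if (Known == Value)
        return Reg;
    return std::nullopt;
  }
};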

The this pointer example is actually related to what Quentin mentioned as a future direction here: rewriting calling conventions. If you have

int A::foo() {
return this->value;
}

then you are going to have code something like

foo:
r0 = load r0, #offset_of_value
ret

If the this pointer is live after the call, and it almost certainly is, then it would be better to rewrite this call to avoid clobbering r0. That is, return the this pointer in r0 and the value in r1. That could actually be done as an IR-level pass too, though, if it’s deemed profitable.
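
Spelled at the source level, the rewritten convention would look roughly like the hypothetical helper below, returning ‘this’ alongside the value; the real transform would of course be on the calling convention during codegen, not on the C++ signature:

#include <utility>

struct A {
  int value;
  int foo() { return value; }  // original: 'this' arrives in r0, the return clobbers r0
};

// Hypothetical rewritten form: hand 'this' back to the caller as well,
// e.g. 'this' in r0 and the loaded value in r1, so the register carrying
// 'this' stays usable after the call.
std::pair<A *, int> foo_keep_this(A *Self) {
  return {Self, Self->value};
}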

Anyway, didn’t mean to distract from the immediate goals of this project. I’m excited to see the SCC code make it in tree and see what else it enables.

One more, just for fun: Inter-procedural stack allocation. That is, if foo calls bar and bar needs 4 bytes of stack space, then instead of bar allocating 4 bytes, it adds an attribute to itself, and foo allocates 4 bytes of space at the bottom of the stack for bar to use.
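
A tiny sketch of what that bookkeeping might look like, with made-up names (the real thing would presumably be a function attribute plus frame-lowering support):

#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical summary produced after codegen of each callee: how many bytes
// of stack it would have needed (the "attribute" bar adds to itself).
std::map<std::string, unsigned> CalleeFrameBytes = {{"bar", 4}};

// When laying out foo's frame, reserve extra space at the bottom of the stack
// large enough for the biggest such callee, so bar can use foo's space and
// skip its own stack adjustment entirely.
unsigned frameSizeFor(unsigned OwnBytes, const std::vector<std::string> &Callees) {
  unsigned Extra = 0;
  for (const std::string &Callee : Callees) {
    auto It = CalleeFrameBytes.find(Callee);
    if (It != CalleeFrameBytes.end())
      Extra = std::max(Extra, It->second);
  }
  return OwnBytes + Extra;
}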

Can you please provide some links to understand the benefits of IP stack allocation?

I actually don’t have any links. It’s just something I thought about implementing a while ago. The main benefits I can think of are code size and performance savings, as ‘bar’ in my example would not contain any stack-manipulation code.

I have also written the draft proposal; I will share it through the Summer of Code site.
Here is the link: https://docs.google.com/document/d/1DrsaFJdtxV73Zpns2bEgjATLFcWuaYMPHuvt5THLeLk/edit?usp=sharing
This may not be very effective, but I would still like to give it a try. Please review it quickly; I have 23 hours to submit the final PDF.

I just read it. It looks good to me, although I’m not a register allocator or SCC expert, so hopefully others will have good feedback for you.

Thanks
Pete

This is something that can be performed with a module pass at the IR level, right? The codegen would just have to be taught to recognize the attribute.

Yes.

I do not know if there is a paper on this as this is quite trivial, but IIRC the Open64 register allocator does that.
Anyhow, the algo is:
Given a call graph SCC

  • Allocate the functions with no calls, or those where every callee has already been allocated.
  • Propagate the clobbered registers to the callers of that function by updating the related regmasks on the call sites.
    Repeat until there are no more candidates.

Right direction overall. The simplest approach to this is feasible within a summer and should definitely give you good results when you have hot calls with many spills/fills around them that could be eliminated.

One does not necessarily need the call graph. The compiler can do this as an opportunistic optimization: the callee collects a resource mask and the caller consumes it when it is “there”. Within a module, when the callee (a leaf) is compiled before the caller, the information is “there”. When the call graph is available you want a bottom-up walk for this optimization.

A few things to keep an eye on:

  • The twist here could be that the bottom-up order conflicts with the layout order, so the two optimizations would have to run independently. (I have not looked into the layout algorithm, so this might not be an actual issue here.)

Don’t we have the linker reorganizing the layout?
Or is your comment just targeting “section-based” object files without -ffunction-sections?

  • You also need to consider the supported preemption model. When a function can be preempted dynamically the statically collected information for a callee cannot be used and the optimization may not kick in.

We could only do it on private/internal functions anyway, which are not interposable, unless I missed something?

  • Most of the work I would expect to be tuning the assignment heuristics in the allocator (a live range that spans two call sites: should it go into a scratch register that is not used in one call but is in the other? How could profile data change that? etc.). But again, perhaps the cheapest approach is not to go into the heuristics and only remove a scratch-register fill/spill around a call site when that register is not destroyed anywhere down in the call tree.

How would these calls be different from any other instruction that clobbers a (set of) fixed register(s)? I’d expect this should already be handled (albeit maybe not tuned) by the current infrastructure.

One more, just for fun: Inter-procedural stack allocation. That is, if foo calls bar and bar needs 4 bytes of stack space, then instead of bar allocating 4 bytes, it adds an attribute to itself, and foo allocates 4 bytes of space at the bottom of the stack for bar to use.

This is something that can be performed with a module pass at the IR level, right? The codegen would just have to be taught to recognize the attribute.

I thought you would need to run codegen to get a specific number of bytes to allocate. You would compile bar, note down how many bytes of stack it would have required, then add that as an attribute. The IR level could only make a good guess as to how many bytes we need.

Saying that, this is basically like having a compiler-controlled redzone. That’s what made me think of it in the first place. If bar needed only 4 bytes, and the system supports a redzone, then it’s likely bar wouldn’t have allocated anything on the stack. I just extended that so that the number of bytes can be larger than what redzones typically provide.

Cheers,
Pete

Here is the patch that I believe achieves the first point.

CGCCC.diff (1.57 KB)

OK, makes sense. Thanks.

Hello Mehdi,

   could I ask you if you could share your patch?
We are testing whether at least some leaf-call register allocation optimization could help, and your patch or some pointers on what to do would be very useful.

Thank you
   Adam

Hi,

Here is the patch (I already sent it in this thread):

CGCCC.diff (1.57 KB)