Is the fast register allocator O(n^2) in the size of the function?

Hi,

I am the maintainer of an open source compiler that makes use of the
LLVM toolchain to perform code generation. A user of our compiler, who
is working on a proprietary project, recently shared with me an
example where our toolchain was invoking `llc -O0` on a file with a
rather large function in it, and the register allocation pass was
taking upwards of 5 minutes. This seemed unusual to me, so I
downloaded the debug symbols for LLVM (this was with LLVM 11.0.0), and
I found that slightly over 50% of llc's runtime was spent in a
very small loop inside llvm::MachineRegisterInfo::hasOneNonDBGUse.
I believe this loop corresponds to the while loop in
llvm::MachineRegisterInfo::defusechain_iterator::advance.

It looks like this function simply scans through all of a register's
defs and uses until it finds one that it considers satisfactory. If
that query is asked repeatedly, it seems to me it would introduce
behavior that is quadratic in the size of the function. Am I missing
something, or is there some other, more sensible reason why the
profile is so dominated by this one loop?
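
For context, here is a toy sketch of the pattern I think the profile
is showing. This is not LLVM's actual code (the types and names are
made up for illustration), but it captures what I suspect: a
per-register chain of operands gets walked from its head on every
query, so asking the question once per operand of a register makes
the total work quadratic.

```cpp
// Toy model only: not LLVM's actual data structures or code.
// Each register keeps a singly linked chain of its operands; the
// "has exactly one non-debug use" query walks that chain from the
// head every time it is asked.
#include <cstddef>

struct Operand {
  bool IsDebug = false;     // debug-only operand
  bool IsUse = false;       // use, as opposed to a def
  Operand *Next = nullptr;  // next operand of the same register
};

struct Reg {
  Operand *Head = nullptr;  // head of the def/use chain
};

// One call is O(chain length) in the worst case, since defs and debug
// operands are skipped but still traversed.
bool hasOneNonDebugUse(const Reg &R) {
  std::size_t Count = 0;
  for (const Operand *O = R.Head; O; O = O->Next)
    if (O->IsUse && !O->IsDebug && ++Count > 1)
      return false;
  return Count == 1;
}

// If a pass asks the question once per operand of the register, the
// total work is roughly (number of operands)^2, which would show up in
// a profile as time dominated by the chain-walking loop above.
std::size_t totalWorkIfQueriedPerOperand(const Reg &R,
                                         std::size_t NumOperands) {
  std::size_t Answers = 0;
  for (std::size_t I = 0; I < NumOperands; ++I)
    Answers += hasOneNonDebugUse(R) ? 1 : 0;
  return Answers;
}
```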

My end goal here would be to submit a patch that optimizes this case,
since it seems like something that could be computed more efficiently.
However, I don't understand the code or the algorithm especially well,
so I was hoping someone could give me some pointers on where to get
started with something like this.

Does anyone have any suggestions on what I could do to make register
allocation run faster on quite large functions?

Thanks,

+Lang, who might still have some register allocation knowledge kicking around (though his responses may be a bit delayed).

Generally, there are a number of algorithms that don’t scale well on especially large functions, and I’d encourage users to modify their code generators to chunk up functions, or they’ll be chasing down this sort of issue indefinitely.

But improvements are certainly appreciated, though I don’t have any particular knowledge or pointers on how to improve the register allocator. (My gut reaction is that a lot of people have probably looked and not found better tradeoffs, but I’m a bit of a pessimist; fresh eyes can often help :slight_smile: )

Hi Dwight,

Can you share the profile (a stack trace, for instance) you’ve observed?

As far as I remember, the fast regalloc (the one that runs at -O0 by default) shouldn’t call hasOneNonDBGUse at all. The only potential quadratic behavior in that allocator is when we scan for live-outs (RegAllocFast::mayLiveOut), and even that part shouldn’t cause significant problems, because we cap the number of checks at a hard-coded upper bound of 8 per variable.
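
To illustrate what I mean by that bound, here is a simplified sketch (not the actual RegAllocFast code; the types and names are made up): we inspect only a limited number of uses, and once the limit is hit we conservatively assume the value may live out of its defining block.

```cpp
// Simplified sketch of a bounded live-out check: not the actual
// RegAllocFast code, just the idea behind the per-variable limit.
#include <cstddef>
#include <vector>

struct UseInfo {
  unsigned BlockNumber;  // basic block containing this use
};

constexpr std::size_t UseCheckLimit = 8;  // hard-coded bound per variable

// Returns true if a value defined in block DefBlock may be live out of
// that block.  Only the first few uses are inspected; once the limit is
// reached we conservatively assume the value is live out.
bool mayLiveOutSketch(unsigned DefBlock, const std::vector<UseInfo> &Uses) {
  std::size_t Checked = 0;
  for (const UseInfo &U : Uses) {
    if (++Checked > UseCheckLimit)
      return true;                  // gave up: assume the worst
    if (U.BlockNumber != DefBlock)
      return true;                  // used outside the defining block
  }
  return false;                     // all uses are local to DefBlock
}
```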

If RegAllocFast::mayLiveOut turns out to be problematic, we could start to cache its results.
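
If we went that route, one possible shape for such a cache, keyed by virtual register number, might look like this (purely illustrative, not a proposed patch; the class and its interface are hypothetical):

```cpp
// Hypothetical illustration of caching the live-out answers per virtual
// register; just the shape of the idea.
#include <unordered_map>
#include <utility>

class LiveOutCache {
  std::unordered_map<unsigned, bool> Cached;  // vreg number -> may live out

public:
  // Compute is whatever scan currently answers the question; it runs at
  // most once per virtual register, after which the cached answer is used.
  template <typename ComputeFn>
  bool mayLiveOut(unsigned VirtReg, ComputeFn &&Compute) {
    auto It = Cached.find(VirtReg);
    if (It != Cached.end())
      return It->second;
    bool Result = std::forward<ComputeFn>(Compute)(VirtReg);
    Cached.emplace(VirtReg, Result);
    return Result;
  }

  void clear() { Cached.clear(); }  // reset between functions
};
```

The idea being that the existing scan would be passed in as the compute callback, so it runs at most once per virtual register.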

Anyhow, I’ll need more information to be able to help you.

Cheers,
-Quentin