Machine Scheduler on PowerPC: Latency Limit and Register Pressure

Hi,

I’ve been looking at the Machine Scheduler on PowerPC, specifically the pre-RA machine scheduler running in the default bidirectional mode (so both the top-down and bottom-up queues are considered). I’ve come across an example where the scheduler picks a poor ordering for the instructions, which results in very high register pressure and, in turn, in spills. The problem comes from the fact that the Machine Scheduler applies a maximum latency limit when it considers instructions to schedule: a high-latency instruction will not be scheduled before all of the available lower-latency instructions have been scheduled. This happens regardless of register pressure, because the higher-latency instruction is not even added to the “Available” queue that the heuristics use when picking the next instruction to schedule.
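To make the mechanism concrete, here is a minimal, self-contained sketch of the behaviour as I understand it (a toy model with hypothetical names, not LLVM's actual data structures): a node moves from Pending to Available only once the current cycle reaches its ready cycle, and the pick-next heuristics only ever look at Available, so a high-latency node is invisible to them no matter how much register pressure is building up.

#include <cstdio>
#include <string>
#include <vector>

// Toy model only; hypothetical names, not LLVM's real classes.
struct Node {
  std::string Name;
  unsigned ReadyCycle; // first cycle at which its operands' latency is covered
};

// A node becomes visible to the heuristics only once its ready cycle arrives.
static void releasePending(unsigned CurrCycle, std::vector<Node> &Pending,
                           std::vector<Node> &Available) {
  for (auto It = Pending.begin(); It != Pending.end();) {
    if (It->ReadyCycle <= CurrCycle) {
      Available.push_back(*It); // now a candidate for the pick-next heuristics
      It = Pending.erase(It);
    } else {
      ++It; // stays invisible, regardless of register pressure
    }
  }
}

int main() {
  std::vector<Node> Pending = {
      {"load A", 0}, {"load B", 0}, {"load C", 0}, {"div A/B", 8}};
  std::vector<Node> Available;
  for (unsigned Cycle = 0; Cycle <= 8; Cycle += 8) {
    releasePending(Cycle, Pending, Available);
    std::printf("cycle %u candidates:", Cycle);
    for (const Node &N : Available)
      std::printf(" [%s]", N.Name.c_str());
    std::printf("\n");
  }
  // At cycle 0 only the three loads are candidates; the div joins them
  // only at cycle 8, so until then the heuristics have had nothing but
  // loads to choose from.
  return 0;
}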

My question is: why do we have that latency limit in the first place? If an instruction can be scheduled (i.e., all of the instructions it depends on have already been scheduled), shouldn’t it at least be considered?

The example is listed below:

test.c

Yes, I’ve run into this problem myself: the Pending queue isn’t even checked by the tryCandidate() logic, so the ready-cycle cutoff takes priority over all other scheduling decisions.

I personally would be open to changes in this area. To start the brainstorming, I could imagine moving nodes into the Available queue once they come within some target-specific limit, instead of only when they hit their latency cycle limit, and then letting tryCandidate() weigh the remaining cycles against the other scheduling criteria.
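For concreteness, a hypothetical variant of the toy releasePending() sketch from the first message (PendingReleaseWindow is a made-up knob, not an existing LLVM option, and the Node type is reused from that sketch): nodes within a target-specific number of cycles of becoming ready are released early, and a real implementation would also surface the remaining cycles so tryCandidate() could weigh the stall against register pressure and the other criteria.

// Hypothetical sketch only; PendingReleaseWindow does not exist in LLVM.
static const unsigned PendingReleaseWindow = 4; // would be target-specific

static void releasePendingWindowed(unsigned CurrCycle,
                                   std::vector<Node> &Pending,
                                   std::vector<Node> &Available) {
  for (auto It = Pending.begin(); It != Pending.end();) {
    if (It->ReadyCycle <= CurrCycle + PendingReleaseWindow) {
      // A real candidate would also carry (ReadyCycle - CurrCycle), the
      // stall it would cause, for tryCandidate() to weigh against pressure.
      Available.push_back(*It);
      It = Pending.erase(It);
    } else {
      ++It;
    }
  }
}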

Also keep in mind:

  • When making those changes, be careful to get the cycle bumping/simulation logic right.

  • The pending queue is also used as a mechanism to keep compile time in check (see ReadyListLimit), since checking every candidate for every instruction we schedule is O(n**2).
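(For reference, that guard looks roughly like this in llvm/lib/CodeGen/MachineScheduler.cpp; paraphrased from memory, not an exact quote:)

// Paraphrased, not an exact quote: releasePending() stops moving nodes into
// Available once the ready list is full (ReadyListLimit is the hidden
// -misched-limit option), which bounds the per-instruction candidate scan.
if (Available.size() >= ReadyListLimit)
  break; // the remaining nodes stay in Pending for a later cycle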

- Matthias

No, it doesn’t make any sense to schedule something from the pending queue while there are still instructions that can be scheduled in the current cycle. Those available instructions can effectively be scheduled “for free”: an instruction in the pending queue cannot issue before its ready cycle anyway, so filling the earlier cycles with available instructions won’t delay anything in the pending queue!

If that’s not the case, then there’s something wrong with your model. Or, if this kind of VLIW-style scheduling isn’t what you wanted, then you shouldn’t have ordered it.

Set MicroOpBufferSize=1 instead:

// MicroOpBufferSize is the number of micro-ops that the processor may buffer
// for out-of-order execution.
//
// "0" means operations that are not ready in this cycle are not considered
// for scheduling (they go in the pending queue). Latency is paramount. This
// may be more efficient if many instructions are pending in a schedule.
//
// "1" means all instructions are considered for scheduling regardless of
// whether they are ready in this cycle. Latency still causes issue stalls,
// but we balance those stalls against other heuristics.
//
// "> 1" means the processor is out-of-order. This is a machine independent
// estimate of highly machine specific characteristics such as the register
// renaming pool and reorder buffer.

unsigned MicroOpBufferSize;
static const unsigned DefaultMicroOpBufferSize = 0;
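For context, this is (paraphrased, not quoted exactly) how that value feeds into SchedBoundary::releasePending() in llvm/lib/CodeGen/MachineScheduler.cpp: with a non-zero micro-op buffer, a node whose ready cycle has not arrived yet is still released into the Available queue, where tryCandidate() can weigh it against everything else.

// Paraphrase of the relevant check in SchedBoundary::releasePending()
// (simplified; hazard checks and queue bookkeeping are omitted):
bool IsBuffered = SchedModel->getMicroOpBufferSize() != 0;
for (SUnit *SU : Pending) {
  unsigned ReadyCycle = isTop() ? SU->TopReadyCycle : SU->BotReadyCycle;
  if (!IsBuffered && ReadyCycle > CurrCycle)
    continue; // MicroOpBufferSize == 0: the latency limit keeps SU pending
  Available.push(SU); // otherwise SU becomes a candidate despite the stall
}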

Though if you happen to increase register pressure, resulting in extra spills and reloads, then these instructions aren’t really free anymore: you will pay extra cycles later, when the spill and reload instructions are inserted.

This may or may not be a good thing to schedule for; it would be interesting to at least experiment a bit and see whether it improves things.

- Matthias

Yes, of course. But the original problem description was that high-latency operations weren’t scheduled soon enough.
-Andy

Hi Andrew,

Actually, my main concern was the spills. My original text was probably not very clear; sorry about the confusion.

What I wanted to point out is exactly what Matthias mentioned in his first email: the Pending queue is not considered in tryCandidate(), so we can end up with a large number of similar instructions that significantly increase register pressure. In my example it was loads. The scheduler takes all the loads and schedules them together, because they all have the same latency and so are all in the Available queue. I agree that in theory that is a good idea; however, in our case each load occupies another register, and we can end up with more consecutive loads than there are physical registers. If we considered register pressure, we would still start by putting the loads together, but after a few loads we would be forced to schedule a div to release some of the register pressure.
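For readers following along without the original test.c (which is omitted above), a hypothetical kernel of the same shape, explicitly not the original file, would be a stretch of independent loads all feeding long-latency divides:

// Hypothetical stand-in, NOT the original test.c: once the loop is
// unrolled, each r[i] = a[i] / b[i] needs two loads feeding one
// long-latency div. If the scheduler issues all of the equal-latency
// loads first, every loaded value stays live until its div finally
// issues, and the number of live values can exceed the number of
// physical registers, which is exactly what forces the spills.
void kernel(double *r, const double *a, const double *b) {
  for (int i = 0; i < 64; ++i)
    r[i] = a[i] / b[i];
}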

Having said that, I tried your suggestion of setting
MicroOpBufferSize=1
and it certainly seems to improve things, at least for the example I was looking at. Thank you for mentioning it.
I’ll run a few more tests with that parameter change; hopefully that will be enough to get what we want.

Matthias, Andrew, thank you both for the help.

Stefan