LLVM Scheduler and Itineraries: Negative latency?

Hello,

While trying to create a backend in LLVM, I have stumbled upon a problem I am having trouble getting past; hopefully someone can give me hints on what I am doing wrong. The problem is that the assertion at ScheduleDAGList.cpp line 187 is triggered: "Negative latency". How does this happen?

As background:
My target has one issue unit, so my Schedule.td file only contains one functional unit.
My instruction itineraries all take 1 machine cycle to complete (my target is fully pipelined), but with values of 2 and 3 specifying when the result is ready (not all instructions have forwarded results) and 2 as the parameter for when the operands are read.
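
In TableGen terms, my definitions look roughly like the sketch below (the functional unit and itinerary class names are simplified placeholders, and depending on the LLVM version the ProcessorItineraries record may not take the middle bypass list):

  def MyIssue : FuncUnit;

  def IIC_ALU : InstrItinClass;

  def MyItineraries : ProcessorItineraries<[MyIssue], [], [
    // A single 1-cycle stage on the one issue unit. The operand-cycle
    // list says the result is ready at cycle 3 while the source
    // operands are read at cycle 2.
    InstrItinData<IIC_ALU, [InstrStage<1, [MyIssue]>], [3, 2, 2]>
  ]>;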

Kind regards,
  -Magnus

Hello Magnus,

My instruction itineraries all take 1 machine cycle to
complete (my target is fully pipelined), but with values of 2 and 3
specifying when the result is ready (not all instructions have
forwarded results) and 2 as the parameter for when the operands are read.

Does this mean that your instructions always have a single-cycle delay
slot? The latency right now is definitely negative, since you
specified that the instruction takes a single cycle, but the operands are
read in the second cycle.
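
As a rough sanity check (this is my reading of how the itinerary data is turned into edge latencies, so treat the exact formula as an approximation), the scheduler ends up computing something like

  latency = cycle at which the producer's result is ready
          - cycle at which the consumer reads its operand
          = 1 - 2
          = -1

because the producing instruction's total stage latency is only one cycle, and a negative value here is exactly what fires the "Negative latency" assertion.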

Forwarding to llvm-dev...

Hello Magnus,

I am trying to model a fairly simple five-stage pipelined processor.

Ok.

The problem is that some instructions need the last stage (write back) to be
finished so the correct operand is selected for a following instruction in
stage 3.

Ok, this is pretty typical.

machine cycles and higher values for when the result is ready (3) and when
the operands are referenced (2) are shown. Do you have any hints on what I
am doing wrong, since the example seems to show that this "negative latency"
should work?

So, in your case the operands are read in the second cycle, then the
instruction occupies the functional unit for several cycles, and after
that, at some cycle, the results are written back. Is this correct?

If yes, then I think you should model this with two stages. The first
stage will be "issue" with a 1-cycle latency. The second stage will be the
actual "execution unit", for which you can have, say, a 1-cycle stage
duration, with the operands read at the beginning of the stage and the
result available at, say, cycle 2 from the beginning of the stage.
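
A minimal sketch of what I mean, with made-up unit and itinerary class names (adjust the ProcessorItineraries arguments to whatever your LLVM version expects):

  def MyIssue : FuncUnit;
  def MyExec  : FuncUnit;

  def IIC_ALU : InstrItinClass;

  def MyItineraries : ProcessorItineraries<[MyIssue, MyExec], [], [
    // Stage 1: one cycle in the issue unit.
    // Stage 2: one cycle in the execution unit. The operand-cycle list
    // [3, 1, 1] says the result is available at cycle 3 and the source
    // operands are read at cycle 1, i.e. at the start of the execution
    // stage, so the use cycle can no longer be later than the def cycle.
    InstrItinData<IIC_ALU,
                  [InstrStage<1, [MyIssue]>, InstrStage<1, [MyExec]>],
                  [3, 1, 1]>
  ]>;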

You might want to look into the ARM backend as an example of a target with
rather complex scheduling.