Order of TableGen definitions

I’m building a backend based on LLVM 3.6 (because that was the head when I started) for a target architecture with instructions for copying between registers of different classes and also for copying literal values to registers, e.g. roughly:

TAX X A  ; copy from A to X
TXA A X  ; copy from X to A
TIX 42 X ; set X to 42
TIA 42 A ; set A to 42

with the instruction .td contents roughly as follows (omitting instruction formats, etc.):

def TAX { ins aRegs:$A, outs xRegs:$X, [(set xRegs:$X, aRegs:$A)] }
def TXA { ins xRegs:$X, outs aRegs:$A, [(set aRegs:$A, xRegs:$X)] }
def TIX { ins imm:$val, outs xRegs:$X, [(set xRegs:$X, imm:$val)] }
def TIA { ins imm:$val, outs aRegs:$A, [(set aRegs:$A, imm:$val)] }

then, when doing instruction selection for a CopyToReg(X, val) node (produced by an IR "ret i8"; the calling convention places the result in register X0):

(a) with the immediate instructions TIX+TIA defined first, it works, though sub-optimally:

%vreg0 = TIX 42; aRegs:%vreg0
%X0 = COPY %vreg0; aRegs:%vreg0

(b) but when TAX+TXA are defined first, it selects this circular pair:

%vreg1 = COPY %vreg0; xRegs:%vreg1 aRegs:%vreg0
%vreg0 = TAX %vreg1; aRegs:%vreg0 xRegs:%vreg1

and subsequently aborts when determining liveness (“Use not jointly dominated by defs”).

Very naively, I'm surprised that the order of the rule definitions matters at all, i.e. that it chooses the inter-register copy rather than finding the better-fitting literal copy. Adding cost hints such as isAsCheapAsAMove didn't change this behaviour.

Is this the expected behaviour, or is defining TIX first just a crude workaround for some deeper problem with my code?

Thanks for whatever light you can shed…


I don't think register-class-to-register-class copy instructions are supposed to have patterns. Such copies should instead be emitted by the copyPhysReg hook in your target's InstrInfo class. I believe your immediate instructions should also be marked isReMaterializable.
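Roughly like this, as a sketch only — the register classes aRegs/xRegs and the opcode names are from your mail, while the instruction class MyInst and the i8imm operand type are placeholders for whatever your target actually defines:

```tablegen
// Immediate loads: cheap to re-create at the point of use rather
// than spill, and no more expensive than a plain register move.
let isReMaterializable = 1, isAsCheapAsAMove = 1 in {
  def TIX : MyInst<(outs xRegs:$X), (ins i8imm:$val),
                   [(set xRegs:$X, imm:$val)]>;
  def TIA : MyInst<(outs aRegs:$A), (ins i8imm:$val),
                   [(set aRegs:$A, imm:$val)]>;
}

// Register-to-register copies: empty pattern list, so the pattern
// matcher never picks them; copyPhysReg in <Target>InstrInfo
// chooses them when lowering COPY instead.
def TAX : MyInst<(outs xRegs:$X), (ins aRegs:$A), []>;
def TXA : MyInst<(outs aRegs:$A), (ins xRegs:$X), []>;
```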

As to your question about pattern ordering: all patterns are first put into a list in declaration order and then sorted by various criteria. One of those criteria is a notion of "complexity", a rough approximation of how many SelectionDAG nodes the pattern will replace. More complex patterns are ordered first, and two patterns with the same complexity tend to stay in the order they were declared. I believe all of your patterns have the same complexity, so they stay in their original order. If you open the generated DAGISel.inc for your target, you can find lines containing "Complexity = " that show the computed complexity of each pattern.

It's possible to forcibly increase or decrease the complexity by adding a "let AddedComplexity = ..." to your instruction definition. This number is added to the calculated complexity before the sort. I believe it can also be negative to lower a pattern's priority, though that's much less common. You can find examples in many of the in-tree targets; I'm most familiar with X86, and I know it's used in many places there.
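For instance, reusing the hypothetical MyInst/i8imm definitions above (the value 10 is arbitrary — anything that breaks the tie in the right direction works):

```tablegen
// Bias the sort so the immediate-load pattern is tried before any
// same-complexity pattern, such as a register-to-register copy.
let AddedComplexity = 10 in
def TIX : MyInst<(outs xRegs:$X), (ins i8imm:$val),
                 [(set xRegs:$X, imm:$val)]>;
```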

Hope this helps.