Confusion in understanding Pat class

What would the first argument, (brcond (i32 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12), to Pat in the following code reduce to?

Pat<(brcond (i32 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12),
    (InstBcc GPR:$rs2, GPR:$rs1, bb:$imm12)>;

I can’t pretend to understand SelectionDAG, but the classes you want are in llvm/include/llvm/Target/.

Look for class Pat< and def brcond.

If you want to see the “compiled” tablegen before it goes to the backend, you can run llvm-tblgen on it. That’d be something like:

./bin/llvm-tblgen <the file>

That will give you the raw records; then search for the relevant names. You will likely need to add include paths, which I believe is done with -I (something like -I llvm/include, relative to your source tree); check --help for the actual flags.


This pattern describes a conditional branch. The first argument is the conditional value. The second argument is the address you will jump to if the conditional value is not 0.

During the match, $rs1 is the first argument of CondOp, $rs2 the second, and $imm12 is the address.

The second part of the pattern is the instruction (or DAG of instructions; here it is just a single one) that the pattern gets reduced to. Here it is InstBcc with $rs2 as the first operand, $rs1 as the second, and $imm12 as the third.

GPR and bb are operand types - they should be defined somewhere in the backend you are looking at.
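Put schematically, the match and the rewrite line up like this (a sketch only: CondOp and InstBcc are the placeholder names from the question, not real LLVM definitions):

```tablegen
// Sketch: each :$name binds a value during the match, and the same
// binding is reused when emitting the output dag.
//
//   match:  (brcond (i32 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12)
//                                    ^         ^            ^
//                                   $rs1      $rs2        $imm12
//
//   emit:   (InstBcc GPR:$rs2, GPR:$rs1, bb:$imm12)
def : Pat<(brcond (i32 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12),
          (InstBcc GPR:$rs2, GPR:$rs1, bb:$imm12)>;
```

Note that the output operand order ($rs2 before $rs1) need not match the input order; it follows whatever the instruction definition expects.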


Thanks for the reply.
What about i32? What role is it playing here?

> What about i32? What role is it playing here?

My understanding is that class Pat wants a dag node, which in this example is the entire (brcond (i32 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12).

In (i32 (CondOp GPR:$rs1, GPR:$rs2)), i32 is the operator of that dag node (a ValueType can be used as an operator).

The TableGen Programmer’s Reference (LLVM 16.0.0git documentation) describes the format of dag nodes.
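As a quick illustration of that dag syntax (made-up names only: Foo and Bar are not real LLVM records), a dag value is written with the operator first, followed by the operands, each of which may carry a :$name binding and may itself be a nested dag:

```tablegen
// Illustrative only: Foo and Bar are placeholder records.
def Foo;
def Bar;

// Operator first, then operands; nodes can nest, and each operand
// can be named with :$name for later reference.
def Example {
  dag node = (Foo Bar:$x, (Foo Bar:$y, Bar:$z));
}
```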

There is no implicit cast in TableGen, so the i32 operator is the cast to i32. My guess is that CondOp’s output type is i64 while brcond probably expects an i32. Actually, you can try removing that i32 and compiling: if you see a “type mismatch” error, that will confirm the cast is necessary.

i32 stands for 32-bit integer, i64 for 64-bit integer. You will see i8, i16, i32, and i64 most commonly.


I believe the problem is that all the setcc-style nodes (seteq, setne, etc.) have an output type of any integer type (including vectors), and brcond accepts the same for its first argument, which means they are unconstrained without an explicit cast. In reality, targets choose which type is used for condition values by overriding getSetCCResultType (which defaults to a pointer-sized integer in address space 0), so the backend’s patterns need to cast to the matching type.
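For example (a hedged sketch, still using the question’s placeholder names CondOp and InstBcc): on a target whose getSetCCResultType returns i64, the same pattern would cast to i64 instead, and omitting the cast leaves the type unconstrained:

```tablegen
// Sketch only: CondOp and InstBcc are the question's placeholders.
// On a target where condition values are i64, the cast changes:
def : Pat<(brcond (i64 (CondOp GPR:$rs1, GPR:$rs2)), bb:$imm12),
          (InstBcc GPR:$rs2, GPR:$rs1, bb:$imm12)>;

// Without any cast, TableGen cannot infer a single type for the
// CondOp result and will reject the pattern as ambiguous:
//   def : Pat<(brcond (CondOp GPR:$rs1, GPR:$rs2), bb:$imm12), ...>;
```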


Where did you get that kind of knowledge from?

The nodes are defined in llvm/include/llvm/Target/. Both type profiles use SDTCisInt<0> (SDTSetCC and SDTBrcond are the type profiles used for setcc and brcond; seteq etc. are just nice PatFrag wrappers around setcc that you can see later in the file). Operand 0 is the first output for setcc, but the first input for brcond, since brcond has no outputs; the first two arguments to SDTypeProfile are the number of outputs and the number of inputs, respectively.
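For reference, those definitions in llvm/include/llvm/Target/TargetSelectionDAG.td look roughly like this (quoted from memory, so check your checkout for the exact constraint lists):

```tablegen
// setcc: 1 result, 3 operands. The result (operand 0) must be an
// integer, the two compared values must have the same type, and the
// condition-code operand is OtherVT.
def SDTSetCC : SDTypeProfile<1, 3, [
  SDTCisInt<0>, SDTCisSameAs<1, 2>, SDTCisVT<3, OtherVT>
]>;

// brcond: 0 results, 2 operands. The condition (operand 0) must be an
// integer, and the target basic block is OtherVT.
def SDTBrcond : SDTypeProfile<0, 2, [
  SDTCisInt<0>, SDTCisVT<1, OtherVT>
]>;
```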
