Graph regions are limited to a single block, and there is no implicit “control flow” defined by the textual order of the operations. The execution order is defined by the enclosing operation.
There is no restriction on whether operations inside a graph region may themselves have regions.
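For illustration, here is a sketch in the spirit of the graph-region example from the MLIR LangRef, using the `test.graph_region` operation from the test dialect (the `"op0"`/`"op1"`/`"op2"` names are placeholders, not real operations):

```mlir
"test.graph_region"() ({  // a single block, with no implicit control flow
  // %1 uses %0 and %2 before they appear textually; this is legal in a graph region
  %1 = "op1"(%0, %2) : (i32, i32) -> i32
  %2 = "op2"(%1) : (i32) -> i32
  %0 = "op0"(%1) : (i32) -> i32  // use-def cycle: %0 -> %1 -> %0
}) : () -> ()
```

Both the forward references and the use-def cycle here would be rejected in an SSACFG region.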
I can’t really understand the analogy right now unfortunately.
cannot have blocks within the entry block. The documentation does not state a specific reason for this.
allows cycles in the control flow graph.
is designed to express structure. For example, in GPU programming, we send a script to the GPU for parallel execution. Another example is expressing circuit design, which can contain cycles.
Thus, the order of operations within the region does not matter.
is like a flag on an operation, indicating that the operation's region may contain cycles.
This isn’t exact. Operations can have as many regions as they want. But all of those regions should be of the same kind.
This isn’t exact either. SSA CFG regions may and do have cycles, otherwise one would be unable to express a basic loop construct. What they can’t have are cycles between operations, i.e., an operation A cannot use as operand the result of an operation B if B also uses the result of A as an operand (going through basic block arguments breaks this cycle). This is not limited to a single block.
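A basic loop shows such a CFG cycle; a minimal sketch using the upstream `func`, `arith`, and `cf` dialects (the function name is made up):

```mlir
func.func @count_to_ten() {
  %c0 = arith.constant 0 : i32
  %c1 = arith.constant 1 : i32
  %c10 = arith.constant 10 : i32
  cf.br ^loop(%c0 : i32)
^loop(%i: i32):                      // block argument carries the loop variable
  %next = arith.addi %i, %c1 : i32
  %cond = arith.cmpi slt, %next, %c10 : i32
  // back-edge to ^loop: a cycle between blocks, but no use-def cycle between operations
  cf.cond_br %cond, ^loop(%next : i32), ^exit
^exit:
  func.return
}
```

The cycle is between the blocks `^loop` and itself; each individual operation still only uses values that dominate it.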
Function calls are a different kind of control flow, this example is irrelevant.
This makes no sense. Blocks cannot have blocks (directly) in them. Graph regions must have at most one block though.
There isn’t really a control flow graph in graph regions, as opposed to CFG regions, where CFG stands for Control Flow Graph. The control flow graph exists between blocks.
This isn’t true. We don’t send a script to the GPU, but a compiled program. There are multiple kinds of concurrency on GPUs (between threads, warps, and blocks, plus asynchronous copies and tensor operations). None of them are expressed using graph regions in upstream MLIR dialects as far as I know.
Thank you, Alex, for the clarification.
I was going to edit the summary above, but it seems like editing is disabled, so I will put the fixed part here.
SSA Dominance Rule
An SSA value is reachable in blocks dominated by its defining block.
Code
func.func @f() -> () {
  // entry block
  %1 = arith.constant 2026 : i32  // SSA value defined in the entry block
  %zero = arith.constant 0 : i32
  %condition = arith.cmpi sgt, %1, %zero : i32
  cf.cond_br %condition, ^block1, ^block2
^block1:
  %0 = arith.addi %1, %1 : i32  // the entry block is on every path to ^block1, so %1 can be used here
  func.return
^block2:
  %2 = arith.addi %1, %1 : i32  // the entry block is on every path to ^block2, so %1 can be used here
  func.return
}
Indeed, only moderators or highly trusted users may edit posts.
https://en.wikipedia.org/wiki/Dominator_(graph_theory) . Dominance means an operation is encountered on every control flow path leading to the current operation. There are related concepts of reaching definitions and available values/expressions. You can find these and more in any decent compiler textbook.
It is directed. The arrows may point from uses to definitions or vice versa, depending on the interpretation. It just allows for cycles.
There is simply no need for multiple blocks in graph regions. Blocks serve to break use-def cycles via block arguments (also known as phi nodes). Cycles are allowed in graph regions.
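A sketch of block arguments playing the phi-node role, using the upstream `func`, `arith`, and `cf` dialects (the function name is made up): `%x` in `^merge` receives a different definition depending on which predecessor transferred control, which is exactly what a phi node expresses in LLVM IR.

```mlir
func.func @select_like(%cond: i1) -> i32 {
  %a = arith.constant 1 : i32
  %b = arith.constant 2 : i32
  cf.cond_br %cond, ^bb1, ^bb2
^bb1:
  cf.br ^merge(%a : i32)  // pass %a as the block argument
^bb2:
  cf.br ^merge(%b : i32)  // pass %b as the block argument
^merge(%x: i32):          // %x merges the two incoming definitions
  func.return %x : i32
}
```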