scf.WhileOp type inference with dynamic shapes

Hi All

I am trying to add control flow to my dialect.
All ops in our dialect support InferTypeOpInterface and we use it to infer types.

My problem is the following: in cases where the loop ends up with a growing shape, e.g. by doing a concat inside of it, how is type inference supposed to work? I have a few questions about it.

  1. Is it OK to treat UnrankedTensorType for loop arguments as meaning “unknown/indeterminate dynamic shape”, as opposed to meaning unknown rank but a static shape (i.e. one that does not change between iterations)?
  2. Are there any plans, discussions, or pointers on how to handle dynamically growing (or shrinking) shapes inside the scf dialect?
  3. Are there any plans to implement InferTypeOpInterface on the scf dialect, so that it recursively calls the interface on loop bodies and either determines that the shapes can be inferred or leaves them alone as unranked?
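To make question 1 concrete, here is a sketch of what I have in mind (using the same hypothetical toy ops as below): the loop-carried tensor is typed as tensor<*xf32>, so the growing concat result can be yielded without a type mismatch between iterations.

```mlir
// Sketch only: the init value is cast to unranked so the iteration
// argument, the yielded value, and the loop result all agree on
// tensor<*xf32>, even though the concrete shape grows each iteration.
%init = tensor.cast %0 : tensor<1x1x2x2xf32> to tensor<*xf32>
%r:2 = scf.while (%arg0 = %init, %arg1 = %3) : (tensor<*xf32>, tensor<i32>) -> (tensor<*xf32>, tensor<i32>) {
  %c = "toy.less"(%arg1, %2) : (tensor<i32>, tensor<i32>) -> tensor<i1>
  scf.condition(%c) %arg0, %arg1 : tensor<*xf32>, tensor<i32>
} do {
^bb0(%arg0: tensor<*xf32>, %arg1: tensor<i32>):
  %axis = "toy.constant"() {value = dense<0> : tensor<i32>} : () -> tensor<i32>
  %cat = "toy.concat"(%arg0, %0, %axis) : (tensor<*xf32>, tensor<1x1x2x2xf32>, tensor<i32>) -> tensor<*xf32>
  %next = "toy.add"(%1, %arg1) : (tensor<i32>, tensor<i32>) -> tensor<i32>
  scf.yield %cat, %next : tensor<*xf32>, tensor<i32>
}
```

The cost, of course, is that all shape information inside the loop is erased, even for values whose shapes are actually loop-invariant.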

Currently I have to build the blocks up front to determine the result types, then pass those types to the WhileOp builder and splice the ops over from the temporary block, as described in a previous post.

Below is an example of the wrong type inference that occurs for scf.while if I naively use the standard builder methods.

func @main() {
  %0 = "toy.placeholder"() {name = "toy_placeholder"} : () -> tensor<1x1x2x2xf32>
  %1 = "toy.constant"() {value = dense<1> : tensor<i32>} : () -> tensor<i32>
  %2 = "toy.constant"() {value = dense<5> : tensor<i32>} : () -> tensor<i32>
  %3 = "toy.constant"() {value = dense<0> : tensor<i32>} : () -> tensor<i32>
  %4:2 = scf.while (%arg0 = %0, %arg1 = %3) : (tensor<1x1x2x2xf32>, tensor<i32>) -> (tensor<1x1x2x2xf32>, tensor<i32>) {
    %5 = "toy.less"(%arg1, %2) : (tensor<i32>, tensor<i32>) -> tensor<i1>
    scf.condition(%5) %arg0, %arg1 : tensor<1x1x2x2xf32>, tensor<i32>
  } do {
  ^bb0(%arg0: tensor<1x1x2x2xf32>, %arg1: tensor<i32>):  // no predecessors
    %5 = "toy.constant"() {value = dense<0> : tensor<i32>} : () -> tensor<i32>
    %6 = "toy.concat"(%arg0, %0, %5) : (tensor<1x1x2x2xf32>, tensor<1x1x2x2xf32>, tensor<i32>) -> tensor<2x1x2x2xf32>
    %7 = "toy.add"(%1, %arg1) : (tensor<i32>, tensor<i32>) -> tensor<i32>
    scf.yield %6, %7 : tensor<2x1x2x2xf32>, tensor<i32>
  }
  return
}

And of course the verifier does the correct thing, and I get:
error: 'scf.while' op along control flow edge from Region #1 to Region #0: source type #0 'tensor<2x1x2x2xf32>' should match input type #0 'tensor<1x1x2x2xf32>'

What might need to happen (throwing spaghetti at the wall) is to build the bodies of the blocks under the assumption of unranked inputs, then rerun the type-inference pass on the blocks to determine whether the shapes stay fixed. If the inferred shape matches the source type, we can keep the inferred ranked types; otherwise we leave them as unranked types, to be resolved by further lowering and transformations, or punted to the runtime to deal with.
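As a sketch of the "shapes don't change" case (again with hypothetical toy ops): if the body contains only shape-preserving ops, rerunning inference over the body proves that the yielded type equals the iteration-argument type, so the refinement step could keep the ranked types instead of falling back to tensor<*xf32>.

```mlir
// Sketch only: toy.add preserves the operand shape, so the yield type
// matches the iteration argument and the ranked types can be kept.
%r:2 = scf.while (%arg0 = %0, %arg1 = %3) : (tensor<1x1x2x2xf32>, tensor<i32>) -> (tensor<1x1x2x2xf32>, tensor<i32>) {
  %c = "toy.less"(%arg1, %2) : (tensor<i32>, tensor<i32>) -> tensor<i1>
  scf.condition(%c) %arg0, %arg1 : tensor<1x1x2x2xf32>, tensor<i32>
} do {
^bb0(%arg0: tensor<1x1x2x2xf32>, %arg1: tensor<i32>):
  %s = "toy.add"(%arg0, %arg0) : (tensor<1x1x2x2xf32>, tensor<1x1x2x2xf32>) -> tensor<1x1x2x2xf32>
  %i = "toy.add"(%1, %arg1) : (tensor<i32>, tensor<i32>) -> tensor<i32>
  scf.yield %s, %i : tensor<1x1x2x2xf32>, tensor<i32>
}
```

The concat example above would fail this check on the first rerun, and so would stay unranked.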

Which brings me back to my questions above. I'd like to hear everyone's thoughts on anything discussed in the community about how to resolve this kind of type inference.