Do we have type inference in MLIR?

I have defined an op like this:

def AddOp : Armory_Op<"add", [SameOperandsAndResultType]> {
  let summary = "element-wise addition operation";
  let description = [{
    The "add" operation performs element-wise addition between two tensors.
    The shapes of the tensor operands are expected to match.
  }];

  let arguments = (ins TensorOrMemRef:$lhs, TensorOrMemRef:$rhs);
  let results = (outs TensorOrMemRef:$result);
  let assemblyFormat = "$lhs $rhs attr-dict `:` type($result)";
}

I was hoping to use it like this:

func.func @test_add(%arg0 : tensor<5x6xf32>, %arg1 : tensor<5x6xf32>) -> tensor<*xf32> {
  %0 = armory.add %arg0 %arg1 {} : ?
  return %0 : tensor<*xf32>
}

or:

func.func @test_add1(%arg0 : tensor<5x6xf32>, %arg1 : tensor<5x6xf32>) -> tensor<*xf32> {
  %0 = "armory.add"(%arg0, %arg1) : (tensor<5x6xf32>, tensor<5x6xf32>) -> tensor<*xf32>
  return %0 : tensor<*xf32>
}

But neither of them works. Does anyone know how I should use it? I wish to infer result types automatically, so that we can generate tests easily.

%0 = armory.add %arg0 %arg1 {} : ?

The ? isn't a valid type; have you tried passing an actual type here?

Of course the issue is that your function should also return a tensor<5x6xf32>…
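For reference, here is the second example with the types lined up (the exact result type instead of tensor<*xf32>), which parses and verifies with the trait as defined:

func.func @test_add1(%arg0 : tensor<5x6xf32>, %arg1 : tensor<5x6xf32>) -> tensor<5x6xf32> {
  %0 = "armory.add"(%arg0, %arg1) : (tensor<5x6xf32>, tensor<5x6xf32>) -> tensor<5x6xf32>
  return %0 : tensor<5x6xf32>
}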

You defined your op in ODS with SameOperandsAndResultType, which is likely too strict; you may want to try SameOperandsAndResultElementType instead.
See also the InferTypeOpInterface.

Ultimately this can't solve everything you may be looking for: unlike a programming language, the IR is in general designed so that each operation can be parsed in isolation, which requires enough local information to decide on the types.
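Concretely, a minimal ODS sketch of what that can look like (hedged: the constraint and op names mirror the original definition, and the exact include path may differ across MLIR versions). Spelling one operand type after the colon gives the parser the local information it needs; the trait propagates it to the other operand and the result:

include "mlir/Interfaces/InferTypeOpInterface.td"

def AddOp : Armory_Op<"add", [SameOperandsAndResultType]> {
  let arguments = (ins TensorOrMemRef:$lhs, TensorOrMemRef:$rhs);
  let results = (outs TensorOrMemRef:$result);
  // Spell a single type; the trait lets the declarative format derive the
  // other operand type and the result type from it.
  let assemblyFormat = "$lhs $rhs attr-dict `:` type($lhs)";
}

With this, %0 = armory.add %arg0 %arg1 : tensor<5x6xf32> parses, and the result type is inferred as tensor<5x6xf32>.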


@mehdi_amini Yes, it is alright if I use the exact output type. But is it possible to use the trait to infer the type/shape? I think there is enough information. Instead of complaining, we could check whether the inferred type is stricter than the existing one; if so, we replace the type with the inferred one. Would this be better?

You can infer the result type if you have the type of the operands.

Sure, but that isn't what the trait you chose says. The interface allows you to express this.

This isn’t a desired behavior for the syntactic IR: now the in-memory representation may not round-trip exactly anymore.
Also, how do you know that this won't cause an issue for any user of the produced value? (For example, the return must match the function's declared return type.)

We have an in-house project that is built on MLIR. Previously, we implemented shape-inference interfaces and a pass, which let us write cases like this:

func.func @test_add1(%arg0 : tensor<5x6xf32>, %arg1 : tensor<5x6xf32>) -> tensor<*xf32> {
  %0 = "armory.add"(%arg0, %arg1) : (tensor<5x6xf32>, tensor<5x6xf32>) -> tensor<*xf32>
  return %0 : tensor<*xf32>
}

We are looking for something similar but simpler, e.g., automatically inferring the type or shape without implementing the interface functions. It seems that I was misusing the trait. Do we have any existing examples of this? Thanks.

In the .td, we said the type after “:” is the result type, i.e., the one to be inferred, so I think it should be something unknown. The input types are not shown on this operation, but they are actually known. Then, I guess, we can refine the unknown type using that knowledge. Does that sound OK?

It is unclear: do you want to be able to infer the type while parsing? Or do you want to run type inference after parsing, and are asking whether there is an upstream pass that does that?

With the trait you have selected (and with the type-inference .td file included) you would get:

  • build-time type inference (e.g., builder.create<foo>(...) need not specify the result type), as the inference method is generated (this is currently a very, very limited autogeneration that I'd really like to see expanded; a sketch follows after this list);
  • and verification that these match.
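For the first point, a hedged sketch of what build-time inference looks like at a call site (armory::AddOp, loc, lhs, and rhs are illustrative names, not from this thread):

// No result type is passed; the generated inferReturnTypes supplies it.
mlir::Value sum = builder.create<armory::AddOp>(loc, lhs, rhs);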

So, for the first example in your original question: it doesn't work because we don't have a way in the MLIR assemblyFormat or generic syntax to specify what you want. But you can actually do this with a custom parser (e.g., one that invokes the build-time type inference method). Mehdi was trying to point out why one may not want to do so, for ease of debugging etc.
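A hedged sketch of such a custom parser (requires let hasCustomAssemblyFormat = 1; in ODS; the exact inferReturnTypes signature varies across MLIR versions, e.g., the properties argument is newer, so adjust to your tree):

// In AddOp's .cpp; needs mlir/Interfaces/InferTypeOpInterface.h.
mlir::ParseResult AddOp::parse(mlir::OpAsmParser &parser,
                               mlir::OperationState &result) {
  mlir::OpAsmParser::UnresolvedOperand lhs, rhs;
  mlir::Type operandType;
  // Parse `%lhs %rhs attr-dict : operand-type`.
  if (parser.parseOperand(lhs) || parser.parseOperand(rhs) ||
      parser.parseOptionalAttrDict(result.attributes) ||
      parser.parseColonType(operandType) ||
      parser.resolveOperand(lhs, operandType, result.operands) ||
      parser.resolveOperand(rhs, operandType, result.operands))
    return mlir::failure();
  // Invoke the build-time type inference method to compute the result types.
  llvm::SmallVector<mlir::Type> inferred;
  if (mlir::failed(AddOp::inferReturnTypes(
          parser.getContext(), result.location, result.operands,
          result.attributes.getDictionary(parser.getContext()),
          /*properties=*/nullptr, /*regions=*/{}, inferred)))
    return mlir::failure();
  result.addTypes(inferred);
  return mlir::success();
}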

With the second example in your original question, you'd run into the verifier failing. Now you could of course run a type inference pass between parsing and everything else (and as long as you don't use verify-after-all, you'd be able to get to a valid IR state in that case). We don't have a general MLIR type inference pass upstream though; it's been lagging a bit, and downstream folks have often special-cased their inference instead. It would also be a nice addition.
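For reference, a hedged sketch of what such a downstream pass might look like (not an upstream pass; the interface-call signature varies by MLIR version, and a real pass must also update users of the refined values, e.g. func.return and the enclosing function type):

#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Interfaces/InferTypeOpInterface.h"
#include "mlir/Pass/Pass.h"

namespace {
// Hypothetical refinement pass: replace result types with inferred ones.
struct RefineTypesPass
    : mlir::PassWrapper<RefineTypesPass,
                        mlir::OperationPass<mlir::func::FuncOp>> {
  void runOnOperation() override {
    getOperation().walk([](mlir::Operation *op) {
      auto iface = mlir::dyn_cast<mlir::InferTypeOpInterface>(op);
      if (!iface)
        return;
      llvm::SmallVector<mlir::Type> inferred;
      if (mlir::failed(iface.inferReturnTypes(
              op->getContext(), op->getLoc(), op->getOperands(),
              op->getAttrDictionary(), op->getPropertiesStorage(),
              op->getRegions(), inferred)))
        return;
      // Ops in a block are visited in order, so operand types are already
      // refined by the time their users are visited.
      for (auto [res, ty] : llvm::zip(op->getResults(), inferred))
        res.setType(ty);
    });
  }
};
} // namespace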

Thanks for your advice. I think we may need to go this way. How about rewriting isCompatibleReturnTypes for each op (actually the same function for all ops, since we output tensors all the time) to treat refinement as compatible? Then we can write a pass to refine/infer the types.
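A hedged sketch of that shared hook (the helper isRefinement is hypothetical, the cast style varies by MLIR version, and note the caveat in the next reply about SameOperandsAndResultType's stricter verifier):

// Hypothetical helper: an unranked tensor may be refined to any ranked
// tensor with the same element type; otherwise require exact equality.
static bool isRefinement(mlir::Type inferred, mlir::Type actual) {
  auto inferredTensor = mlir::dyn_cast<mlir::TensorType>(inferred);
  auto actualTensor = mlir::dyn_cast<mlir::TensorType>(actual);
  if (!inferredTensor || !actualTensor ||
      inferredTensor.getElementType() != actualTensor.getElementType())
    return false;
  return !actualTensor.hasRank() || inferred == actual;
}

bool AddOp::isCompatibleReturnTypes(mlir::TypeRange inferred,
                                    mlir::TypeRange actual) {
  if (inferred.size() != actual.size())
    return false;
  for (auto [i, a] : llvm::zip(inferred, actual))
    if (!isRefinement(i, a))
      return false;
  return true;
}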

After reading the code in InferTypeOpInterface.td, I am starting to understand why this cannot be done automatically upstream. We may have other types, e.g., int8, int16, or custom types, and it is hard to say what is compatible in general.

You can rewrite isCompatibleReturnTypes per op on the infer trait, but SameOperandsAndResultType has an additional verifier. Initially it did verify compatibility, but that got changed because the trait is also used where strict equality is needed (whether "same" means strictly equal or not was a point of consideration). One can easily get the same behavior as you have here with an OpTraitList though; look at the InferTensorType one.
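A hedged sketch of such a trait list, modeled loosely on upstream's InferTensorType (check that your MLIR version supports TraitList, and adjust the overridden-methods list as needed):

// Hypothetical bundle: relaxed element-type equality plus declared
// inference methods, including the overridable compatibility check.
def Armory_InferTypeTraits : TraitList<[
  SameOperandsAndResultElementType,
  DeclareOpInterfaceMethods<InferTypeOpInterface,
                            ["isCompatibleReturnTypes"]>
]>;

def AddOp : Armory_Op<"add", [Armory_InferTypeTraits]> {
  // ... op definition as before ...
}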


Oh, I forgot to modify the return operand type and the function result type. 🙂