A recent upstream change, landed under the following commit, introduces logic to infer the result type based on a number of operands, such as those used by tensor.InsertSliceOp:
[mlir][ods] Generate inferReturnTypes for ops with TypesMatchWith
@Mogball, it seems you’re the person who worked on the commit that added type inference for operations with the TypesMatchWith trait. I hope you don’t mind us checking with you to confirm that this is an actual issue.
For a quick introduction: @muneeb and I are both compiler engineers at Huawei Cambridge (UK) working with MLIR, and we noticed this issue recently after rebasing our internal fork of LLVM against upstream.
The problem seems to be that some operands of InsertSliceOp (offsets, sizes, and strides) are lists of Value, while the type-inference method (inferReturnTypes) expects each operand to be a single Value, so it fails with a type error when called:
TypeError: inferReturnTypes(): incompatible function arguments. The following argument types are supported:
1. (self: mlir._mlir_libs._mlir.ir.InferTypeOpInterface, operands: Optional[List[mlir._mlir_libs._mlir.ir.Value]] = None, attributes: Optional[mlir._mlir_libs._mlir.ir.Attribute] = None, regions: Optional[List[mlir._mlir_libs._mlir.ir.Region]] = None, context: mlir.ir.Context = None, loc: mlir.ir.Location = None) -> List[mlir._mlir_libs._mlir.ir.Type]
The Python bindings stop requiring the result type if it can be inferred, which means that as we refine type inference or improve modelling, Python code breaks (we have a discussion about this, and an alternative proposed, in ⚙ D126238 [mlir][python] Move results types to keyword-only arg.). In this case you should just omit the result type.
The issue is not that we’re passing too many parameters to the Operation; it is that even when omitting the result type (as is already done in the example posted in the original post), the call to the type-inference method (introduced by [mlir][ods] Generate inferReturnTypes for ops with TypesMatchWith) fails.
I asked about variadics and was told “If the op has variadic results, type inference is not generated”; it seems we need the same restriction on operands for TypesMatchWith (I think we need to change the inference method to use adaptors to be able to handle variadics better).
Interesting. Our in-house hacky fix was to add code that filters out every operand that is not of type Value before calling the type-inference function (since these operands are not involved in inferring the output type anyway), but it felt far from an acceptable long-term solution.
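To make the workaround concrete, here is a minimal sketch of the filtering idea, using a hypothetical stand-in `Value` class rather than the real `mlir.ir.Value` (the names and helper are illustrative, not the actual bindings or our exact patch):

```python
class Value:
    """Stand-in for mlir.ir.Value (illustrative only)."""
    def __init__(self, name):
        self.name = name


def filter_inferrable_operands(operands):
    """Keep only plain Value operands, dropping variadic operand groups
    (Python lists such as offsets/sizes/strides), since those are not
    involved in inferring the result type anyway."""
    return [op for op in operands if isinstance(op, Value)]


# Mixed operand list, shaped like what gets built for tensor.InsertSliceOp:
# two single-Value operands followed by variadic groups as nested lists.
operands = [Value("source"), Value("dest"),
            [Value("off0")], [Value("sz0")], [Value("st0")]]

flat = filter_inferrable_operands(operands)
print([v.name for v in flat])  # -> ['source', 'dest']
```

The filtered list can then be passed to the inference call without tripping the argument type check, at the cost of silently assuming the dropped operands never matter for inference.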
Ouch, OK, let’s see if we can fix this. And obviously we need more tests here upstream. I’m OOO, but I will try to restrict this for now / make it more conservative, and then we can expand support with tests.
defines the input to be a vector of values, but for variadics one can have lists of lists, which is why in
one has the unpacking; note that in the generated code both inferReturnTypes and build_generic take the same operands as input.
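The unpacking mentioned above can be sketched as a one-level flatten: variadic operand groups arrive as nested lists and get spliced into a flat operand sequence. This is an illustrative plain-Python sketch (the helper name is hypothetical), not the generated bindings code:

```python
def flatten_operands(operands):
    """Splice one level of nesting: each variadic group (a list or tuple)
    is expanded in place; single operands pass through unchanged."""
    flat = []
    for op in operands:
        if isinstance(op, (list, tuple)):
            flat.extend(op)  # variadic group: unpack its elements
        else:
            flat.append(op)  # single operand: keep as-is
    return flat


print(flatten_operands(["src", "dst", ["o0", "o1"], ["s0"]]))
# -> ['src', 'dst', 'o0', 'o1', 's0']
```

A fix along these lines would let the inference path accept the same (possibly nested) operand list that build_generic already handles.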
The change just exposed this; it was not the cause. Do you mind filing a GitHub issue for this? (I did not check whether my original proposal would work around this, but it would only be a workaround.)