Unfortunately this is proving more complex than the simple fix I sent in https://reviews.llvm.org/D105558.
The problem with that fix is that it is not extensible:
- OffsetSizeAndStrideOpInterface does not have enough information to resolve a result dimension:
  - for an "extract"-type op, it is simply the corresponding value in the sizes operand (what D105558 implements);
  - for an "insert"-type op, it just forwards to the "dest" operand (also what D105558 implements, but by special-casing).
- Special-casing is not extensible, and the above patch breaks this IREE Flow op (i.e. core cannot special-case and should use an interface); see the sketch after this list.
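
To make the two cases concrete, here is a minimal sketch (value and function names are invented for illustration; `tensor.extract_slice` / `tensor.insert_slice` stand in as representative OffsetSizeAndStrideOpInterface ops, and the syntax follows current upstream MLIR):

```mlir
func.func @fold_dims(%t: tensor<?x?xf32>, %s: tensor<?x?xf32>,
                     %sz0: index, %sz1: index) -> (index, index) {
  %c0 = arith.constant 0 : index

  // "extract"-like: the result dimension is the matching value in `sizes`.
  %0 = tensor.extract_slice %t[0, 0][%sz0, %sz1][1, 1]
      : tensor<?x?xf32> to tensor<?x?xf32>
  %d0 = tensor.dim %0, %c0 : tensor<?x?xf32>  // folds to %sz0

  // "insert"-like: the result dimension forwards to the "dest" operand %t.
  %1 = tensor.insert_slice %s into %t[0, 0][%sz0, %sz1][1, 1]
      : tensor<?x?xf32> into tensor<?x?xf32>
  %d1 = tensor.dim %1, %c0 : tensor<?x?xf32>  // folds to tensor.dim %t, %c0

  return %d0, %d1 : index, index
}
```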
The proper core interface to use for this seems to be InferTypeOpInterface.
However, folks have recently converged on moving the "canonicalization" of dim(InferTypeOpInterface) into a separate pass (https://reviews.llvm.org/D104321).
It seems adopting this more generally would mean moving all DimOp canonicalization patterns to this pass (i.e. why should some producer ops canonicalize automatically while others require a special pass?).
This brings me to trying to make ops that conform to OffsetSizeAndStrideOpInterface also conform to InferTypeOpInterface for the purpose of enabling the folding of DimOp.
This is however not straightforward, because the result may be any type in which unit dimensions are dropped: if `tensor<1x?x1x2xf32>` is a valid return type, then so are `tensor<1x?x2xf32>`, `tensor<?x1x2xf32>`, and `tensor<?x2xf32>`.
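
As a sketch of the ambiguity (value names invented for illustration): both results below are legal for the exact same offsets, sizes, and strides, so there is no single type an InferTypeOpInterface implementation could return:

```mlir
func.func @rank_reduce(%t: tensor<8x?x4x2xf32>, %d: index)
    -> (tensor<1x?x1x2xf32>, tensor<?x2xf32>) {
  // Full-rank result: no unit dimensions dropped.
  %a = tensor.extract_slice %t[0, 0, 0, 0][1, %d, 1, 2][1, 1, 1, 1]
      : tensor<8x?x4x2xf32> to tensor<1x?x1x2xf32>
  // Rank-reduced result: both unit dimensions dropped. Equally valid.
  %b = tensor.extract_slice %t[0, 0, 0, 0][1, %d, 1, 2][1, 1, 1, 1]
      : tensor<8x?x4x2xf32> to tensor<?x2xf32>
  return %a, %b : tensor<1x?x1x2xf32>, tensor<?x2xf32>
}
```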
Dropping the rank-reducing semantics does not appear to be a good option: a combination of ops would be needed to achieve the same effect, which would result in special-casing in transformations, and that is a red flag.
Another simple way would be to extend OffsetSizeAndStrideOpInterface with extra information describing whether an op is "insert"-like or "extract"-like, but this seems to go against the spirit of InferTypeOpInterface and https://reviews.llvm.org/D104321.
In any case, we should be consistent across our decisions, so my first question comes from looking at "Remove canonicalizer for memref.dim via ShapedTypeOpInterface".
How do we decide, for a given op, whether "folding" `dim(op.result())`:
- should be a canonicalization (which implies that `op` must not implement InferTypeOpInterface), or
- should not be a canonicalization?
It seems there is an undesired coupling between "op semantics" (i.e., whether an op implements InferTypeOpInterface) and "canonicalization vs. pass".
For example, it is unclear to me why AllocOp should not be an InferTypeOpInterface (see the sketch below).
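
For illustration, a minimal sketch of why this is surprising to me (value names invented; this is my reading, not an established design point): every dynamic dimension of a `memref.alloc` result maps one-to-one to an operand, so `dim(alloc)` folds with no ambiguity at all:

```mlir
func.func @alloc_dim(%d: index) -> index {
  // Each dynamic dimension of the result corresponds 1-1 to an
  // operand of memref.alloc, so dim(alloc) folds directly to it.
  %m = memref.alloc(%d) : memref<?x4xf32>
  %c0 = arith.constant 0 : index
  %0 = memref.dim %m, %c0 : memref<?x4xf32>  // folds to %d
  return %0 : index
}
```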
Lastly, is there a particular recommendation on how to address the original problem in this thread, in light of the discussion above?
@_sean_silva @MaheshRavishankar @herhut @stellaraccident