Looking at Tensor_ConcatOp as an example:
def Tensor_ConcatOp : Tensor_Op<"concat",
    [Pure,
     DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
     DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]>
All of those traits end up as part of the CRTP pattern in the generated class:
class ConcatOp : public ::mlir::Op<ConcatOp,
    ::mlir::OpTrait::ZeroRegions,
    ::mlir::OpTrait::OneResult,
    ::mlir::OpTrait::OneTypedResult<::mlir::RankedTensorType>::Impl,
    ::mlir::OpTrait::ZeroSuccessors,
    ::mlir::OpTrait::VariadicOperands,
    ::mlir::OpTrait::OpInvariants,
    ::mlir::BytecodeOpInterface::Trait,
    ::mlir::ConditionallySpeculatable::Trait,
    ::mlir::OpTrait::AlwaysSpeculatableImplTrait,
    ::mlir::MemoryEffectOpInterface::Trait,
    ::mlir::OpAsmOpInterface::Trait,
    ::mlir::ReifyRankedShapedTypeOpInterface::Trait> {
I wonder if creating a trait could be a way to approximate polymorphism? It could certainly be a way to check if a given Type is one of my composite types (either generic or specialised).
I’ve not played much with traits in this way, but I believe I could statically cast to my trait type and then call my methods of interest (e.g., getSize()).
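Concretely, something like this rough sketch is what I have in mind for the trait side (names like CompositeTypeTrait and isCompositeType are placeholders, not from an existing dialect):

#include "mlir/IR/Types.h"

// Hypothetical marker trait shared by my composite types.
template <typename ConcreteType>
class CompositeTypeTrait
    : public ::mlir::TypeTrait::TraitBase<ConcreteType, CompositeTypeTrait> {};

// Each composite TypeDef (generic and specialised) would attach this via
// NativeTypeTrait in ODS.

// Any mlir::Type can then be queried without enumerating the concrete types:
static bool isCompositeType(::mlir::Type type) {
  return type.hasTrait<CompositeTypeTrait>();
}

That covers the isa-style check; for actually calling getSize() through a plain mlir::Type handle, my understanding is that a trait on its own mostly acts as a marker, so a TypeInterface (defined in ODS and reached via llvm::dyn_cast) is probably the closer fit to the polymorphism I’m describing.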
Tangentially related, but as I’m working on this I wonder whether, given how many new FP types there are, doing something similar might make sense (see the discussion here: Rethink on approach to low precision FP types). We are introducing loads of funky variants with different numbers of mantissa and exponent bits, plus different NaN rules and whatnot, and I can see that being parametrised in a similar way.