We are currently in the process of adding a new tensor type. The reason is that we need to hold specific information associated with the tensor, e.g. compression ratio, hardware-specific information, etc.
To this end, a separate tensor type, similar to RankedTensorType and UnrankedTensorType, seemed appropriate.
The input to our compiler is a standard dialect such as the TFLite dialect. The operators defined in such a dialect expect their tensors to be one of the built-in tensor types (ranked or unranked), and the dialect enforces this during verification. As a result, we have to implement a subsequent legalization step to an internal dialect that replicates all the ops but uses our own new tensor type.
My question is: rather than having to define a whole new dialect and legalization pass, has it ever been considered to allow dynamic attributes to be added to types?
This would simplify things considerably: we could attach the needed metadata to the existing tensor types rather than maintaining a separate dialect, a legalization pass, and an internal tensor type.
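To make the suggestion concrete, here is a minimal, hypothetical sketch (plain Python, not real MLIR API; all names are invented for illustration) of what "dynamic attributes on types" would buy: the metadata hangs off the existing type, and consumers that don't know about it are unaffected.

```python
from dataclasses import dataclass, field

# Hypothetical model, NOT real MLIR API: a ranked tensor type carrying an
# open-ended attribute dictionary, so metadata (compression ratio,
# hardware-specific info, ...) can be attached without defining a new
# type, a parallel dialect, or a legalization pass.
@dataclass(frozen=True)
class RankedTensor:
    shape: tuple
    element_type: str
    # compare=False: dynamic attributes do not affect type equality.
    attrs: dict = field(default_factory=dict, compare=False)

def with_attr(t: RankedTensor, key: str, value) -> RankedTensor:
    """Return a copy of the type with one extra dynamic attribute."""
    return RankedTensor(t.shape, t.element_type, {**t.attrs, key: value})

t = RankedTensor((4, 8), "f32")
annotated = with_attr(t, "compression_ratio", 0.25)

# Ops that only look at shape/element type see the two types as equal...
assert annotated == t
# ...while metadata-aware passes can still read the attribute.
assert annotated.attrs["compression_ratio"] == 0.25
```

The key design point modeled here is that equality (and hence existing verifiers) ignores the attribute dictionary, so ops that don't care about the metadata keep working unchanged.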
The desire for an open type system (e.g. bring your own TensorType) has been brought up several times. I am also super interested in it. I think the direction is to turn the TensorType hierarchy into TypeInterfaces, so dialects can customize it while still reusing the standard interface when the customized parts don't matter. I am not sure how far along we are right now, though.
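As a rough illustration of that direction (again a hypothetical Python model, not the actual MLIR TypeInterface machinery): generic code programs against an interface, and a dialect can bring its own tensor type that still plugs into that code.

```python
from abc import ABC, abstractmethod

# Hypothetical model of the "TensorType as a TypeInterface" direction:
# generic utilities depend only on the interface, so a dialect-defined
# tensor type with extra metadata reuses them for free.
class TensorLike(ABC):
    @abstractmethod
    def shape(self) -> tuple: ...
    @abstractmethod
    def element_type(self) -> str: ...

class StandardTensor(TensorLike):
    def __init__(self, shape, element_type):
        self._shape, self._elt = tuple(shape), element_type
    def shape(self): return self._shape
    def element_type(self): return self._elt

class CompressedTensor(TensorLike):
    """A dialect-specific tensor type carrying extra metadata."""
    def __init__(self, shape, element_type, ratio):
        self._shape, self._elt, self.ratio = tuple(shape), element_type, ratio
    def shape(self): return self._shape
    def element_type(self): return self._elt

def num_elements(t: TensorLike) -> int:
    """Generic helper that only needs the interface, not a concrete type."""
    n = 1
    for d in t.shape():
        n *= d
    return n

assert num_elements(StandardTensor((2, 3), "f32")) == 6
# The customized type reuses the same generic code when its metadata
# doesn't matter.
assert num_elements(CompressedTensor((2, 3), "f32", ratio=0.5)) == 6
```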
This is where the conversation was left. I think we do have enough facilities implemented now to do it, but it is not an insubstantial amount of work to do it right. I'd be interested in finding a way to do it that requires as few changes as possible to downstream projects that use these types heavily, but I haven't taken the time to come up with a proposal along those lines or to look seriously at how. It probably needs a prototype that we can look at and tweak onto an implementation path that doesn't break everything all at once.
We’ve had talks about this on and off for a bit over a year (the ODM talk on layout in tensors was last July already, and it also came up in the elementwise discussion), but it hasn’t been pushed hard enough. The “adding a layout” proposal had some unresolved issues wrt API & usability: just having a generic Attribute there called layout could have been an option for how to add it, but not for how to use it or what one could do with it.
Verification and composition are hard here, I feel. If I have a plain add in my dialect that takes a TensorType, do I verify that only known attributes are used on my op? Or do we require that any unknown attributes not change the legality of my operation in some way? (And then it is also OK for anything to drop them.) The same goes for optimization passes: e.g., constant folding on an add in my dialect would now need to first check whether it can propagate the information so that the resultant type is valid (else every constant fold drops the layout assignment, say).
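One way to phrase that design question in code (a hypothetical sketch, not a proposal; the names and the strict/lenient split are invented for illustration) is as two policies: a verifier that either rejects or tolerates unknown attributes, and a constant fold that must decide whether to propagate or drop them.

```python
# Hypothetical sketch of the verification/composition question: given
# types that may carry unknown attributes, should an op verifier reject
# them, tolerate them, or require that folds propagate them?
KNOWN_ATTRS = {"layout"}  # attributes this dialect's `add` understands

def verify_add(lhs_attrs: dict, rhs_attrs: dict, strict: bool) -> bool:
    """strict=True: only known attributes are allowed on operands.
    strict=False: unknown attributes are assumed not to affect legality."""
    unknown = (set(lhs_attrs) | set(rhs_attrs)) - KNOWN_ATTRS
    return not (strict and unknown)

def fold_add(lhs_attrs: dict, rhs_attrs: dict) -> dict:
    """Constant-fold add: keep an attribute only when both operands agree,
    otherwise drop it -- the "every fold drops the layout" failure mode."""
    return {k: v for k, v in lhs_attrs.items() if rhs_attrs.get(k) == v}

a = {"layout": "NHWC", "compression_ratio": 0.25}
b = {"layout": "NHWC"}
assert not verify_add(a, b, strict=True)     # unknown attribute rejected
assert verify_add(a, b, strict=False)        # unknown attribute tolerated
assert fold_add(a, b) == {"layout": "NHWC"}  # mismatched attrs dropped
```

Neither policy is obviously right, which is exactly the composition problem: the strict verifier blocks other dialects' metadata, while the lenient one licenses every pass to silently discard it.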