I was just having a similar thought last night in response to thinking through Mehdi’s probing on “what about encoding”.
I’m +1 on this as an end goal. Ultimately, I believe that we should have a stable, high-level programming model/op set at its own layer of the stack. I don’t think we’re ready to take this apart yet, though. And if we were, I would want to spend a bit of time asking whether this is just a move or if we are reworking it along the way. Given that such an op set is destined to expand without limit, we might be better off planning for that and doing something more space- and compile-time-efficient than generating full (highly redundant) C++ code for everything. Right now, the source of truth for all of that is a relatively simple YAML file, which then gets turned into something that takes a long time to compile. It seems like there may be a better way.
My understanding of the encoding is that everywhere a `tensor` would be used, the `tensor:abstract` would also be a valid use. I’d rather not do that. I think it would be better to be explicit about where `abstract_tensor` is valid (instead of it being valid everywhere by default). For example, for the transformations (say tiling, which is probably the main thing that needs to be addressed), explicit handling of `abstract_tensor` to convert it to a real tensor is better IMO than it happening automatically. It will force us to think about where a real tensor is instantiated (for now using `linalg.init_tensor`).
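To make that concrete, here is a rough sketch of the kind of explicit materialization I mean (the `!x.abstract_tensor` type and the `x.matmul` op are strawman names invented for illustration; only `linalg.init_tensor` and `linalg.matmul` are real ops):

```mlir
// Hypothetical starting point: a named op whose result is still "abstract"
// (!x.abstract_tensor is a strawman type, not an existing one):
%0 = "x.matmul"(%a, %b)
    : (tensor<4x8xf32>, tensor<8x16xf32>) -> !x.abstract_tensor<4x16xf32>

// Explicit bridging: materialize a real tensor with linalg.init_tensor and
// rewrite the op to operate on it, so tiling only ever sees real tensors:
%init = linalg.init_tensor [4, 16] : tensor<4x16xf32>
%1 = linalg.matmul ins(%a, %b : tensor<4x8xf32>, tensor<8x16xf32>)
                   outs(%init : tensor<4x16xf32>) -> tensor<4x16xf32>
```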
I think adding such an attribute would require us to tighten that up and be more explicit. Adding a type to the shaped type hierarchy is not free from this perspective either and will require an audit and tightening of constraints.
I’m lost right now: what does “adding an `abstract` encoding on the tensor type” mean?
Is it a type/attribute added to the tensor type? If so, would we need an `abstract_tensor` type, or would this `abstract` encoding be itself a type?
I don’t quite see how it would be transparently usable everywhere in the infrastructure either.
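To spell out what I think the two alternatives are (strawman syntax on both sides; neither the `#x.abstract` attribute nor the `!x.abstract_tensor` type exists today):

```mlir
// Option A: "abstract" carried as an encoding attribute on the existing
// builtin tensor type (strawman #x.abstract attribute, purely illustrative):
%0 = "x.op"() : () -> tensor<4x?xf32, #x.abstract>

// Option B: a distinct type in the shaped type hierarchy
// (strawman !x.abstract_tensor type, purely illustrative):
%1 = "x.op"() : () -> !x.abstract_tensor<4x?xf32>
```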
I don’t see any major objections here. The main thing to start this work is to decide whether the move from `ShapedType` to a type interface happens before adding the `abstract_tensor` type or not. @River707 I was under the impression that you already have a WIP patch for this. If that is the case, I can wait for your changes to land before adding this type.
Just to clarify, I am planning to add this as a separate type and not as an encoding attribute on the `tensor` type. I want to have no transformations work with `abstract_tensor` (at least to start with) and to make the bridge from `abstract_tensor` to `tensor` explicit where needed.
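Roughly, the shape of what I have in mind (the `!x.abstract_tensor` type and the `x.to_tensor` bridge op below are placeholder names, not a committed design):

```mlir
// abstract_tensor values flow through untouched by transformations:
%a = "x.producer"() : () -> !x.abstract_tensor<16x32xf32>

// The bridge to a real tensor is a single explicit op; everything past this
// point (tiling, etc.) only ever sees real tensor values:
%t = "x.to_tensor"(%a)
    : (!x.abstract_tensor<16x32xf32>) -> tensor<16x32xf32>
```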