Best way to model transpose op "permutation"

I can imagine cases where, when code-generating to parameterized hardware based on a hardware config file, one could end up with a zero-element vector, and it could make sense.

However, by that token, I don’t see why we should prohibit vectors with dynamic dims (for things like ARM SVE). And I also don’t see why we would disallow index inside vectors, or custom types, considering again the symbiotic hardware generation / code generation use cases (e.g., why should we prohibit a custom hardware flow from supporting a quantized type or its own special numerical type directly in a vector register?).
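For concreteness, here is a sketch (in MLIR’s textual syntax; the exact forms below are illustrative, not taken from the verifier source) of the kinds of types the current invariants accept versus reject:

```mlir
// Accepted today: static shape, standard numeric element type.
vector<4xf32>
vector<2x8xi32>

// Rejected by the current invariants (illustrative syntax for the
// restrictions discussed above):
//   vector<0xi32>       // zero-element vector
//   vector<?x4xf32>     // dynamic dimension (e.g. for SVE-style hardware)
//   vector<4xindex>     // index element type
//   vector<4x!my.quant> // custom / dialect-defined element type (hypothetical)
```

Lifting all of these restrictions is what would make VectorType structurally coincide with a static-shaped TensorType.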

Once we lift those restrictions, there is no difference between a VectorType and a static-shaped TensorType. FWIW, I’ve always found the distinction between the two somewhat arbitrary. Also, I didn’t see any comments in the Rationale about the distinction between VectorType and TensorType: https://mlir.llvm.org/docs/Rationale/

Code links indicating the invariants for VectorType:
VectorType::verifyConstructionInvariants
VectorType::isValidElementType

I think I remember when we extended tensor to handle zero-element cases. I can’t remember whether there was a principled reason not to make this legal for vector. I’ve run into this several times and wish vector<0xi32> were legal.

I think @jpienaar answered this question in detail upthread. Tensors are more abstract, while vectors are expected to map to things the hardware actually has; 0-element vectors are thus unrealistic. Is there a situation where you find yourself needing 0-element vectors but can’t use tensor types instead?

I’m actually ambivalent as to whether vector is the right type to use for the OP. But I have run into cases where I have wanted to use it to represent a physical set of values across an ABI boundary, where zero-length cases can crop up during transformation, and the transformations are easier to write if the type is allowed to shrink to zero length before being removed entirely in a later phase (typically, transformations that change the arity of signatures are hard to commingle with others). Having types with constraints that make them hard to transform reduces their utility or makes the transforms needlessly complicated.

For example, the conversion framework can apply transformations which produce intermediately invalid IR as long as the final form is valid. But since this constraint on the type is an assert, it cannot receive the same treatment.

On a related note, the scenario you refer to commonly happens for memrefs, where one ends up with a 0-sized dimension for a memref in a part of the code that is never executed. This is common for memrefs since they can be dynamically shaped - but vectors aren’t. The scenario I’m referring to here is vectorization, where, say, a memref<?xf32> is cast to a memref<?xvector<8xf32>>, where the new ‘?’ is the old one floordiv 8. The generated code uses an if/else structure to separate the full tiles from the partial tiles. If the ‘?’ is never known at compile time, the shape stays dynamic; but if the symbol %N associated with the dynamic dimension is specialized and set to a constant, say 6, the shape folding pattern will turn the memref for the full-tile vectorized code version into a memref<0xvector<8xf32>>. The conditional guard then folds into an ‘always false’ and the ‘then’ part gets deleted later. This works today because we allow 0 in a memref’s shape.
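The specialization step above can be sketched roughly as follows (schematic IR only; op names and the guard structure are elided or hypothetical, the point is the type evolution):

```mlir
// Full-tile view of %buf : memref<?xf32> with extent %N, where the new
// '?' is %N floordiv 8 (cast/view op elided):
//   %tiles = <view> %buf : memref<?xf32> -> memref<?xvector<8xf32>>
//
// A guard separates full tiles from the scalar remainder:
//   if (%N floordiv 8 > 0) { ... full-tile loop over %tiles ... }
//   ... partial-tile / remainder code ...
//
// After specializing %N = 6, shape folding gives:
//   %tiles : memref<0xvector<8xf32>>   // legal: memrefs allow 0-sized dims
// and the guard folds to 'false', so the full-tile branch is deleted.
//
// Note the 0 appears in the memref's shape, not the vector's - so the
// vector<8xf32> inside never needs to shrink to zero elements.
```

This is exactly the kind of intermediate degenerate state that memrefs tolerate today and that a zero-length vector would be needed for in analogous vector-typed scenarios.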

OTOH, vector types are statically shaped, so the situation of ?s simplifying to 0 will not arise - hence there is less motivation to allow 0. On top of that, you anyway have tensor types to model 0-sized 1-d value arrays, or memrefs if you are dealing with memory.

I’m willing for the conclusion to be that vectors are a highly specific type. For context, I hit this edge a couple of months ago and (as a result) approached the problem differently - and my main feedback was that the constraint and the specificity of the type were surprising (which is the same feedback as on this thread, and may signify a naming or documentation bug).

The case I was trying to model was representing a set of descriptors (effectively a virtual register of i32 values that get transported to the device in some way that may require splitting and repacking to match physical “register” constraints, which vary by target) holding the combined dynamic dimensions crossing an invocation boundary. There is a version of the invocation that takes descriptors and one that doesn’t, and in the dynamic-shape case you use the former. For generality, you can always start with the more dynamic version and canonicalize it later to the static version, but if you use vector to model this (which in my mind made some degree of sense, even though this isn’t a physical hardware register), you run into the type fragility in the degenerate case. In my mind, this usage was much closer to being a “vector register” than what I’m used to thinking of a tensor as representing, so I reached for the vector type to represent it, discovering the degenerate-case fragility later when the program crashed on the assert.

My main point is that degenerate cases tend to come up in unexpected ways, and it is useful to have a type system that is robust to them, even if those degenerate forms don’t ever materialize in nature. But I also understand wanting a very specific thing without that baggage. In that case, it may be useful to think about how we could make it clear to future people that this is a very specific type for a specific level of the problem, which its current placement and definition do not really do: MLIR covers a very large span, and having the very high-level and very low-level types as they are, with overly generic names, has led to confusion.

Let me turn this around: is there a situation where you find yourself needing vector but can’t use tensor types instead? TensorType seems to be structurally a superset of VectorType, so the reverse question doesn’t make sense. It seems like VectorType doesn’t really contribute that much.