The memref type has a restriction on its element type in both MemRefType and the TypeParser. Should we consider relaxing this constraint to allow non-built-in element types?
One thing to consider (raised by @ftynse) is that “we need to know the size of the elemental type for almost any operation on a memref”. We should have a type interface that a non-built-in type is required to implement in order to be suitable as a memref element type.
I’m +1 on a Type Interface for reporting physical size. The quantized types would all satisfy the constraints of such an interface if it existed (since they maintain an explicit “storage type” that has to be a primitive int or float).
Memref is also restricted in the specification, and that is more important to address than relaxing the parser rule. (Incidentally, the restriction on tensor types in the specification no longer corresponds to the code.) I would like to have a clear definition of the contract a type must respect in order to be a memref element. Size is the thing we will hit immediately, but it feels like we need a more general formulation, like “a type that is intended to be stored in memory”. For example, unit or void types don’t make sense as memref elements conceptually, but they can report themselves as being of zero size just fine.
Otherwise, I am generally supportive of making memrefs more flexible. This should not be specific to the quantized types, to avoid built-in types depending on dialect types.
If there is something like a type trait, we could allow types that have, say, a MemRefElement trait as valid memref element types; then we could make such types implement a getSize that returns a strictly positive value, if that’s what we want.
To define the contract, perhaps we could define a memref element type as either:

1. a primitive type;
2. a tuple of valid memref element types;
3. a type T implementing MemRefElementTypeInterface, which has the following three methods:
   A. `getStorageType`: returns a type P, which is a valid memref element type.
   B. `packIntoStorageType`: materializes IR that, given a value of type T, returns a value of type P.
   C. `unpackFromStorageType`: materializes IR that, given a value of type P, returns a value of type T.
This should compose nicely and recursively, and the infra only needs to be extended to support tuples as memref element types. (Theoretically, we could just use arbitrarily sized integers and bit manipulations to extract fields, but supporting tuples directly seems superior.)
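To make the proposed interface concrete, here is a toy Python model of the three methods, operating on plain values rather than materializing IR (which is what the real MLIR interface would do). The `IndexSet` type and every name below are hypothetical, invented purely for illustration:

```python
# Toy model of the proposed MemRefElementTypeInterface contract.
# In MLIR, pack/unpack would materialize IR; here they just transform values.

class IndexSet:
    """Hypothetical dialect type: a small set of indices in [0, 8),
    stored in memory as an i8 bitmask (its storage type)."""

    storage_type = "i8"  # getStorageType: a valid (primitive) element type

    @staticmethod
    def pack(indices):
        """packIntoStorageType: value of type T -> value of type P."""
        mask = 0
        for i in indices:
            mask |= 1 << i
        return mask

    @staticmethod
    def unpack(mask):
        """unpackFromStorageType: value of type P -> value of type T."""
        return {i for i in range(8) if mask & (1 << i)}

# Round-tripping through the storage type is lossless.
assert IndexSet.unpack(IndexSet.pack({0, 3, 5})) == {0, 3, 5}
```

The point of the round-trip assertion is the composition property: a load from the memref yields a P, which unpack turns back into a T, and a store does the reverse.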
The function that computes the element storage size is something like:
```python
def storageSize(t: Type):
    if isPrimitiveType(t):
        return getPrimitiveTypeSize(t)
    if isTupleType(t):
        return sum(storageSize(elementType) for elementType in t)
    if implementsMemRefElementTypeInterface(t):
        return storageSize(t.getStorageType())
    return None
```
packIntoStorageType/unpackFromStorageType allow nice compositions with ops like generic_atomic_rmw, which need to bottom out on primitive types but could otherwise contain arbitrary calculations.
This looks great to me as far as I can understand it. Minor comment: I think for 1) you also meant to include vectors of primitive types. Similarly, it’s not fully clear how vectors interact with 3).
I’d rather just keep the current list of allowed built-in types: integers, floats and vectors thereof, and add any type that has the interface.
Tuples are a separate discussion because they don’t have underlying memory semantics, and likely need layout information associated in order to compute their size.
It’s not only the size, we will likely have to handle layout and packing. E.g. is tuple<i3, i5> one byte because it bit-packs, two bytes because it does not but there is no layout-imposed padding, or eight bytes because something requires all elements to be aligned at four bytes? Then again, tuples can contain non-built-in types, for which it’s even messier.
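The three interpretations of `tuple<i3, i5>` mentioned above can be sketched as layout policies over element bit widths. The policy names and formulas below are invented for illustration, not an actual MLIR layout model:

```python
# Sketch: size in bytes of a tuple under three hypothetical layout policies.
import math

def tuple_size(bit_widths, policy):
    if policy == "bit-packed":
        # Elements packed back-to-back at the bit level: i3 + i5 = 8 bits.
        return math.ceil(sum(bit_widths) / 8)
    if policy == "byte-aligned":
        # Each element rounded up to whole bytes, no extra padding.
        return sum(math.ceil(w / 8) for w in bit_widths)
    if policy == "align-4":
        # Each element occupies a slot rounded up to 4-byte alignment.
        return sum(4 * math.ceil(math.ceil(w / 8) / 4) for w in bit_widths)
    raise ValueError(f"unknown policy: {policy}")

assert tuple_size([3, 5], "bit-packed") == 1    # one byte, bit-packed
assert tuple_size([3, 5], "byte-aligned") == 2  # two bytes, no padding
assert tuple_size([3, 5], "align-4") == 8       # eight bytes, aligned slots
```

The point is that `storageSize` alone is underspecified for tuples: the answer depends on a layout policy that the type system would also have to carry.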