In the context of lowering shape computations, we have come across the problem that we now allow `tensor<index>` but not `memref<index>`.
As another example, the HLO dialect uses `tensor<index>` to specify shapes for dynamic operations. When we lower these to buffer form, we would expect `memref<index>` to represent those shapes. Even if we deferred lowering the shape vectors to buffer form, we would have to do so before reaching LLVM, as they might need a memory representation if their size is not statically known. The same holds true for lowering shape computations in the shape dialect.
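As a sketch of the situation (illustrative only; the exact op names and whether `shape.shape_of` yields an extent tensor here are assumptions):

```mlir
// A shape computation producing an extent tensor of index values.
func @get_shape(%arg : tensor<?x?xf32>) -> tensor<2xindex> {
  %shape = shape.shape_of %arg : tensor<?x?xf32> -> tensor<2xindex>
  return %shape : tensor<2xindex>
}

// After bufferization we would naturally expect the signature
//   func @get_shape(%arg : memref<?x?xf32>) -> memref<2xindex>
// which requires memref<index> to be a legal type.
```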
The rationale document says:

> While it may be useful to have `memref<?xindex>` to express indirect accesses, e.g. sparse matrix manipulations or lookup tables, it creates problems MLIR is not ready to address yet. MLIR needs to internally store constants of aggregate types and emit code operating on values of those types, which are subject to target-specific size and alignment constraints. Since MLIR does not have a target description mechanism at the moment, it cannot reliably emit such code. Moreover, some platforms may not support vectors of type equivalent to `index`.
I think this is no longer true. We have a mechanism to describe the size of `index` when lowering to LLVM, and we already need to be able to represent constants of `index` elements because the tensor type allows them.
I would like to lift this restriction and allow `memref` of `index` type, converting it, in the LLVM case, to a `memref` of integers of the configured bitwidth. This would be consistent with how we treat `index` in the conversion otherwise.
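Concretely, assuming `index` is configured to lower to `i64`, the proposal would treat the two types below identically in the standard-to-LLVM conversion (the descriptor layout shown is the usual one for a rank-1 memref; this is a sketch, not a spec):

```mlir
// memref<?xindex>  would convert the same way as  memref<?xi64>,
// i.e. to the standard rank-1 memref descriptor in the LLVM dialect:
//
//   !llvm.struct<(ptr<i64>,          // allocated pointer
//                 ptr<i64>,          // aligned pointer
//                 i64,               // offset
//                 array<1 x i64>,    // sizes
//                 array<1 x i64>)>   // strides
```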