Tensor Layout in MLIR

Hi everybody,

I was looking at this proposal from 2019 that aimed to augment the “tensor” type with memory layout information. It sounds like it would be very useful for any downstream ML compiler, and the feedback was almost entirely positive, so I’m curious why it was never implemented.

On a related note, the RFC mentions the problems of “layout assignment” and separating “logical dims and physical layout”. I’d really love to know how these are solved in other projects, e.g., OpenXLA. Any pointers to docs, source code, or just wisdom in general will be greatly appreciated.

We went with an attribute named “encoding” on the builtin tensor type: Builtin Dialect - MLIR
Introduced in commit llvm/llvm-project@7714b40 ([mlir] introduce “encoding” attribute to tensor type · GitHub)
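To make that concrete, here is a small sketch of how the encoding slot looks in textual IR. The builtin tensor type accepts an optional attribute after the element type and treats it as opaque; the `#transposed` affine-map attribute below is just a hypothetical layout for illustration, not a convention defined by the builtin dialect:

```mlir
// Hypothetical layout attribute: logical (row, col) stored column-major.
#transposed = affine_map<(d0, d1) -> (d1, d0)>

// Two tensors with the same logical shape but different encodings are
// distinct types, so layout can be tracked through the type system.
func.func @example(%plain: tensor<8x16xf32>,
                   %laid_out: tensor<8x16xf32, #transposed>) {
  return
}
```

Because the attribute is part of the type, any dialect can attach its own layout semantics (e.g. the sparse_tensor dialect uses this slot for `#sparse_tensor.encoding<...>`) without changes to the builtin dialect itself.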