[RFC] tensor.pack and tensor.unpack

I don't see why the tensor.pack operation is "a memory related abstraction". It literally takes an nD tensor and converts it into a higher-dimensional tensor by re-organizing the elements of that tensor. You now have a new tensor. In tensor world this has copy semantics. So I don't see any "memory related abstraction" here. The fact that you would also like to do this on memrefs is a separate issue.
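For concreteness, here is a rough sketch of that re-organization (the tile sizes and shapes are just illustrative):

```mlir
// Pack a 128x256 tensor into 8x16 tiles: the result is a new 4-D tensor
// whose trailing dimensions hold the tile elements (copy semantics).
%packed = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [8, 16]
    into %pack_dest : tensor<128x256xf32> -> tensor<16x16x8x16xf32>

// unpack is the inverse: it materializes the original layout as another new tensor.
%unpacked = tensor.unpack %packed inner_dims_pos = [0, 1] inner_tiles = [8, 16]
    into %unpack_dest : tensor<16x16x8x16xf32> -> tensor<128x256xf32>
```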

That seems like it is mixing concerns. I don't see any harm in having a tensor that you annotate through some encoding to say it needs to go into shared memory (see the sketch below). The only thing you need to account for during bufferization is reducing the amount of memory actually used. memref and tensor have different functions. Eventually everything has to be mapped to memref because you need explicit loads/stores, but staying in tensor land avoids having to do complex alias analysis to ensure that your transformations are correct. (I know because I implemented tile and fuse using Linalg ops on memrefs and it was a nightmare. I am happy I removed it link).
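As a sketch of what I mean by annotating through an encoding (the `#shared` attribute and its dialect are hypothetical, just to illustrate the idea):

```mlir
// Hypothetical encoding attribute; the concrete attribute would be dialect-specific.
#shared = #my_dialect.shared_memory

// The packed tensor carries the encoding; nothing memory-related happens yet,
// it is still a value with copy semantics.
%packed = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [8, 16]
    into %dest : tensor<128x256xf32> -> tensor<16x16x8x16xf32, #shared>

// During bufferization the encoding can guide the allocation, e.g. placing the
// backing buffer in memref<16x16x8x16xf32, #gpu.address_space<workgroup>>.
```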

To echo Stella’s point from earlier: the tensor-based code-generation approach is one way of generating code (one that has been used effectively by downstream users like IREE), but it is not the only way to generate code. There is no harm in having multiple tools in the toolbox.

I don't understand the "action at a distance" part. There is no action at a distance with pack and unpack operations. In any case, this is being built up and used in IREE and will be tested on real models. I'd be happy to share our findings at that point.
