# [RFC] Add `tensor.from_shape` operation


## Background

The concept of destination-style (DPS) ops has become important for bufferization and tiling.

At the moment, DestinationStyleOpInterface lives in the Linalg dialect. If we want the concept of DPS ops to become a first-class citizen of MLIR, we should move it to mlir/Interfaces.

In that case, anyone who wants to use DPS ops needs a value that can be passed to the inits/outs arguments. The only operation that can produce an uninitialized tensor is defined in the Linalg dialect, which might introduce unwanted dependencies.

@matthias-springer introduced the bufferization.alloc_tensor operation, which has side effects and is inserted in preparation for the One-Shot Bufferize pass, so it does not serve as an alternative to linalg.init_tensor.
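For reference, the two existing ways to materialize such a tensor value look roughly like this (the shape and SSA names are illustrative):

```mlir
%d = arith.constant 16 : index

// Linalg op: yields an uninitialized tensor of the given shape, but pulls
// in a dependency on the Linalg dialect.
%a = linalg.init_tensor [%d, 42] : tensor<?x42xf32>

// Bufferization op: same kind of result, but it carries allocation
// semantics and is meant to be inserted right before One-Shot Bufferize.
%b = bufferization.alloc_tensor(%d) : tensor<?x42xf32>
```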

The only similar operation in TensorOps is tensor.generate, but it would be quite misleading to use it to create a tensor that is defined only by its shape.
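For comparison, tensor.generate requires a region that yields every element, so abusing it just to materialize a shape would force an arbitrary placeholder value (a sketch, with illustrative names):

```mlir
%d = arith.constant 16 : index
%cst = arith.constant 0.0 : f32

// The body must yield an element even though only the shape matters here.
%t = tensor.generate %d {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<?x42xf32>
```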

Previously on linalg.init_tensor series:

- RFC: Promoting linalg.init_tensor to the bufferize dialect
- PR: Define a linalg.init_tensor operation.

## Proposal

Move linalg.init_tensor to tensor.from_shape to unblock implementing TilingInterface outside of Linalg by removing the dependency on LinalgDialect. In addition, the name linalg.init_tensor was confusing, since the op does not actually initialize any elements of the tensor.
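As a sketch (the exact assembly format is of course up for discussion), the change would be a direct rename of the existing form:

```mlir
// Before: depends on the Linalg dialect.
%0 = linalg.init_tensor [%d, 42] : tensor<?x42xf32>

// After (proposed spelling; semantics unchanged).
%1 = tensor.from_shape [%d, 42] : tensor<?x42xf32>
```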

@nicolasvasilache @ftynse @herhut @_sean_silva @MaheshRavishankar @matthias-springer @stellaraccident

I’ll obnoxiously cite myself from the previous discussion:

> so it makes sense to me to move the op to improve the layering.
>
> There is a longer-term problem of the tensor type living in the builtin dialect while all “common” operations, including the proposed from_shape, live in the tensor dialect; this leads clients of the destination-passing style to depend on the tensor dialect, but it is a rather independent problem from what is being proposed.

Moving the op seems useful to me to improve layering. So +1.

+1 for moving this op.

Nit: tensor.from_shape initially suggests that the operand is a “shape”. I personally prefer numpy.empty, so I’d suggest tensor.empty (when in doubt, look at NumPy :stuck_out_tongue: ). Not a strong opinion, though.
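For concreteness, a minimal sketch of that spelling, assuming the semantics of linalg.init_tensor carry over and dynamic sizes are passed as operands:

```mlir
// Hypothetical spelling: an "empty" tensor defined only by its shape and
// element type; the dynamic extent %d fills the '?' dimension.
%d = arith.constant 16 : index
%e = tensor.empty(%d) : tensor<?x42xf32>
```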


+1 for tensor.empty. Let’s call it that.

+1 works for me, thanks for making progress on this!

+1.

The sparse tensor dialect introduced sparse_tensor.init a while back for exactly this purpose, since there was no equivalent tensor op and the Linalg op felt out of place. Since then, the sparse op has been replaced with bufferization.alloc_tensor, but having a higher-level version of this feels right. Please reuse part of the sparse tensor documentation in the new op.

+1, this will allow us to simplify TilingInterface, whose implementation for non-destination-style ops (e.g., tensor.pad) currently depends on Linalg (because it needs to create a linalg.init_tensor).