The doc says `sizes: tensor-rank number of sizes which specify the sizes of the result tensor type.`, which sounds like the sizes of the `dest` tensor. However, my understanding is that this is wrong and it should be the `source` tensor.

Even for the `source` tensor, it's not clear to me whether it means the `slice` shape or the `source` tensor shape. There is a difference between the two if the strides are not equal to 1. For example, if the source tensor is 1x4 and the stride is 1x3, should the `sizes` be 1x2, or should it still be 1x4?

I got the answer from another channel. Summarizing here:

The `sizes` is the shape of the source tensor, and the entire source tensor space is inserted into the dest at the given offsets. We only consider stride == 1 at the moment. The source tensor is of `sizes`, and it would extract/insert from/into a size * stride subregion; it is the user's responsibility to ensure that offset + size * stride of the small tensor fits within the large tensor.
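To illustrate the offset + size * stride rule with hypothetical shapes (a sketch only; as noted above, stride support beyond 1 may be limited in practice): a 1x2 source inserted with strides [1, 2] at offset [0, 0] touches a subregion of extent 0 + 2 * 2 = 4 along the last dimension, so the destination must be at least 1x4 there.

```
// Sketch: source sizes are [1, 2]; with strides [1, 2] the elements are
// scattered over a 1x4 subregion of %dest, and the bound check
// offset + size * stride = 0 + 2 * 2 = 4 <= 4 holds.
%ret = tensor.insert_slice %src into %dest[0, 0] [1, 2] [1, 2]
    : tensor<1x2xf32> into tensor<1x4xf32>
```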

IR like this would be invalid, because %arg0: tensor<1x1x3xf32> has to match the [1, 1, 2] in `%init[0, 0, 0] [1, 1, 2]`, as the entire source tensor space is inserted:

```
builtin.func @f(%arg0: tensor<1x1x3xf32>) -> tensor<1x1x2xf32> {
%init = linalg.init_tensor [1, 1, 2] : tensor<1x1x2xf32>
%ret = tensor.insert_slice %arg0 into %init[0, 0, 0] [1, 1, 2] [1, 1, 1] : tensor<1x1x3xf32> into tensor<1x1x2xf32>
return %ret : tensor<1x1x2xf32>
}
```
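A corrected version, assuming the intent is to fill the whole 1x1x2 destination, would take a source whose type matches the `sizes` list (a sketch; the `builtin.func`/`linalg.init_tensor` spelling follows the snippet above and may differ in newer MLIR versions):

```
builtin.func @f(%arg0: tensor<1x1x2xf32>) -> tensor<1x1x2xf32> {
  %init = linalg.init_tensor [1, 1, 2] : tensor<1x1x2xf32>
  // Source type tensor<1x1x2xf32> now matches the sizes [1, 1, 2].
  %ret = tensor.insert_slice %arg0 into %init[0, 0, 0] [1, 1, 2] [1, 1, 1] : tensor<1x1x2xf32> into tensor<1x1x2xf32>
  return %ret : tensor<1x1x2xf32>
}
```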

The `sizes` is indeed redundant information in this particular case, but there is also a rank-reducing version of these ops where you can drop 1s from the list. If you have rank-reduced tensors, the non-1 dimensions match the destination tensor and the missing dimensions should all have size 1. Being explicit in all cases avoids more surprises.
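A sketch of what the rank-reducing variant might look like (hypothetical shapes): the source type drops the unit dimensions, while the `sizes` list still spells out all of the destination's dimensions explicitly.

```
// Rank-reducing sketch: the 1-rank source tensor<2xf32> corresponds to the
// single non-1 entry in sizes [1, 1, 2]; the dropped dimensions are all 1s.
%ret = tensor.insert_slice %src into %dest[0, 0, 0] [1, 1, 2] [1, 1, 1]
    : tensor<2xf32> into tensor<1x1x2xf32>
```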
