Hello everyone! I’ve been working with the tensor dialect’s extract_slice operation to emulate PyTorch’s slicing behavior. Consider a PyTorch tensor defined as follows:
x = torch.tensor([[0, 1, 2, 3, 4],
                  [5, 6, 7, 8, 9]])
To create a slice of this tensor, I use the slice operation like this:
x[:, :1]

and the expected output is

tensor([[0],
        [5]])
To replicate this behavior, I’ve tried using the tensor.extract_slice operation in my MLIR code:
module {
  func.func @forward(%arg0: tensor<2x5xi64>) -> tensor<2x1xi64> {
    %extracted_slice = tensor.extract_slice %arg0[0, 0] [2, 1] [1, 1] : tensor<2x5xi64> to tensor<2x1xi64>
    return %extracted_slice : tensor<2x1xi64>
  }
}
After executing the above code using ExecutionEngine in MLIR’s Python bindings, the result I obtained was:
Unfortunately, this output does not match the one produced by PyTorch.
I think the reason may be that the slicing logic of tensor.extract_slice differs from PyTorch’s. Although the documentation mentions that tensor.extract_slice “extract[s] a tensor from another tensor as specified by the operation’s offsets, sizes and strides arguments”, it doesn’t provide detailed information about the underlying extraction logic (for example, the order in which elements are extracted).
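For what it’s worth, my current understanding of the offsets/sizes/strides semantics is that each dimension d selects sizes[d] elements starting at offsets[d] with step strides[d], which can be modeled in NumPy as below. This is only a sketch of how I read the docs, not the authoritative implementation; the extract_slice helper is my own illustrative function:

```python
import numpy as np

def extract_slice(src, offsets, sizes, strides):
    # My reading of tensor.extract_slice: per dimension d, take sizes[d]
    # elements starting at offsets[d], stepping by strides[d].
    idx = tuple(slice(off, off + size * stride, stride)
                for off, size, stride in zip(offsets, sizes, strides))
    return src[idx]

x = np.arange(10).reshape(2, 5)          # same values as the PyTorch tensor
y = extract_slice(x, [0, 0], [2, 1], [1, 1])
print(y)  # under this interpretation: [[0], [5]], matching x[:, :1]
```

Under this model, the MLIR snippet above should already agree with PyTorch’s x[:, :1], which is why the mismatch I observe is confusing to me.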
Could anyone offer insights into the slicing logic used by tensor.extract_slice, or a solution to the mismatch between the two results? Any guidance or suggestions would be greatly appreciated!