How to write slicing op in linalg::genericOp

I’m trying to write a slicing operation using Linalg::genericOp.
Ex: a = b[:, :2]

I wrote a linalg::genericOp like this:

       %4 = linalg.init_tensor [8, 1, 2] : tensor<8x1x2xf32>
       %5 = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d1, d2)>, affine_map<(d0, d1, d2) -> (d0, d1, d2)>], iterator_types = ["parallel",       "parallel", "parallel"]} ins(%3 : tensor<8x1x4xf32>) outs(%4 : tensor<8x1x2xf32>) {
       ^bb0(%arg2: f32, %arg3: f32):  // no predecessors
         linalg.yield %arg2 : f32
       } -> tensor<8x1x2xf32>

But, after lowering to affine, the loop became:

     %2 = memref.alloc() : memref<8x1x2xf32>
     affine.for %arg2 = 0 to 8 {
       affine.for %arg3 = 0 to 4 {
         %46 = affine.load %1[%arg2, 0, %arg3] : memref<8x1x4xf32>
         affine.store %46, %2[%arg2, 0, %arg3] : memref<8x1x2xf32>
       }
     }

The upper bound of %arg3 in the affine loop is 4, but in my understanding it should be 2.
Am I missing anything in the linalg::genericOp?
Or does linalg::genericOp only support inputs and outputs with the same shape?

A linalg.generic iterates over all elements of the input and output tensors you provide, and it assumes that the sizes of input and output tensor dimensions mapping to the same iteration dimension match. In your example, the iteration dimension d2 maps to the tensor sizes 2 and 4, which is invalid. When lowering to loops, the lowering takes the first shape associated with d2, which is 4. As a result, the smaller tensor is accessed out of bounds.

You are thus right that inputs and outputs that map to the same iteration dimensions need to have the same shape!

Your problem can be solved with a tensor.extract_slice operation. The following code should work:

    %a = tensor.extract_slice %b[0, 0, 0] [8, 1, 2] [1, 1, 1] : tensor<8x1x4xf32> to tensor<8x1x2xf32>
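For context, here is a sketch of how the slice could feed a later elementwise linalg.generic, which then has a consistent 8x1x2 iteration domain (the value names and the doubling body are illustrative, not from your pipeline):

    // Slice the last dimension: a = b[:, :, :2]
    %a = tensor.extract_slice %b[0, 0, 0] [8, 1, 2] [1, 1, 1]
        : tensor<8x1x4xf32> to tensor<8x1x2xf32>

    // An elementwise generic consuming %a: every operand dimension now
    // agrees with the 8x1x2 iteration domain, so the lowered loop bounds
    // are 8, 1, and 2 as expected.
    %init = linalg.init_tensor [8, 1, 2] : tensor<8x1x2xf32>
    %c = linalg.generic {
        indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d1, d2)>,
                         affine_map<(d0, d1, d2) -> (d0, d1, d2)>],
        iterator_types = ["parallel", "parallel", "parallel"]}
        ins(%a : tensor<8x1x2xf32>) outs(%init : tensor<8x1x2xf32>) {
      ^bb0(%in: f32, %out: f32):
        %s = arith.addf %in, %in : f32
        linalg.yield %s : f32
    } -> tensor<8x1x2xf32>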

Thanks for the answer.

"the lowering takes the first shape associated to d2 which is 4."

I found the code in LinalgInterfaces.cpp.

Actually, in my pipeline, the genericOp is used for elementwise fusion.
Is there any way to fuse tensor.extract_slice with other loops?

I think it is not possible to fuse GenericOp → ExtractSliceOp → GenericOp at the Linalg level. It may be possible to fuse at the Affine level though (not sure about that).

Element-wise fusion can only fuse GenericOps that share the same iteration domain since the result of fusion is again a GenericOp which can only represent a perfectly nested loop nest.
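As an example of what element-wise fusion can handle, two generics like the following share the same 8x1x2 iteration domain with identity indexing maps, so a fusion pass such as -linalg-fuse-elementwise-ops should be able to merge them into a single generic (the value names %x, %y and the add/negate bodies are made up for illustration):

    #map = affine_map<(d0, d1, d2) -> (d0, d1, d2)>

    // Producer: elementwise add over the 8x1x2 iteration domain.
    %0 = linalg.init_tensor [8, 1, 2] : tensor<8x1x2xf32>
    %add = linalg.generic {
        indexing_maps = [#map, #map, #map],
        iterator_types = ["parallel", "parallel", "parallel"]}
        ins(%x, %y : tensor<8x1x2xf32>, tensor<8x1x2xf32>)
        outs(%0 : tensor<8x1x2xf32>) {
      ^bb0(%a: f32, %b: f32, %out: f32):
        %s = arith.addf %a, %b : f32
        linalg.yield %s : f32
    } -> tensor<8x1x2xf32>

    // Consumer: elementwise negate over the same domain. Because both
    // generics have identical iteration domains, fusion can merge them
    // into one perfectly nested loop nest.
    %1 = linalg.init_tensor [8, 1, 2] : tensor<8x1x2xf32>
    %neg = linalg.generic {
        indexing_maps = [#map, #map],
        iterator_types = ["parallel", "parallel", "parallel"]}
        ins(%add : tensor<8x1x2xf32>)
        outs(%1 : tensor<8x1x2xf32>) {
      ^bb0(%a: f32, %out: f32):
        %n = arith.negf %a : f32
        linalg.yield %n : f32
    } -> tensor<8x1x2xf32>

An extract_slice in between breaks this precondition, since the producer and consumer no longer iterate over the same domain.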

Thanks again~