Thank you for responding!

I mean the source memref dimensions, not the target vector dimensions.

In these examples I transfer_read/transfer_write floats one by one from one 2x2 memref to another. I am lowering these examples with the `mlir-opt --convert-vector-to-llvm` command.

Note: I forgot to mention that I ran into this issue when I tried to apply `CodegenStrategy` (tiling + promotion + vectorization) to `linalg::MatmulOp` (the 2D case) and `linalg::BatchMatmulOp` (the 3D case).

2D case:

```mlir
#map0 = affine_map<(d0, d1)[s0] -> (d0 * 2 + s0 + d1)>
module {
  func @transfer_read_2d(%arg0: memref<2x2xf32>, %arg1: memref<2x2xf32>) {
    %c0 = constant 0 : index
    %c1 = constant 1 : index
    %c2 = constant 2 : index
    %cst = constant 0.000000e+00 : f32
    scf.for %arg2 = %c0 to %c2 step %c1 {
      scf.for %arg3 = %c0 to %c2 step %c1 {
        %0 = memref.subview %arg0[%arg2, %arg3] [1, 1] [1, 1] : memref<2x2xf32> to memref<1x1xf32, #map0>
        %1 = vector.transfer_read %0[%c0, %c0], %cst {in_bounds = [true]} : memref<1x1xf32, #map0>, vector<1xf32>
        %2 = memref.subview %arg1[%arg2, %arg3] [1, 1] [1, 1] : memref<2x2xf32> to memref<1x1xf32, #map0>
        vector.transfer_write %1, %2[%c0, %c0] {in_bounds = [true]} : vector<1xf32>, memref<1x1xf32, #map0>
      }
    }
    return
  }
}
```

3D case:

```mlir
#map0 = affine_map<(d0, d1, d2)[s0] -> (d0 * 4 + s0 + d1 * 2 + d2)>
module {
  func @transfer_read_3d(%arg0: memref<1x2x2xf32>, %arg1: memref<1x2x2xf32>) {
    %c0 = constant 0 : index
    %c1 = constant 1 : index
    %c2 = constant 2 : index
    %cst = constant 0.000000e+00 : f32
    scf.for %arg2 = %c0 to %c2 step %c1 {
      scf.for %arg3 = %c0 to %c2 step %c1 {
        %0 = memref.subview %arg0[0, %arg2, %arg3] [1, 1, 1] [1, 1, 1] : memref<1x2x2xf32> to memref<1x1x1xf32, #map0>
        %1 = vector.transfer_read %0[%c0, %c0, %c0], %cst {in_bounds = [true]} : memref<1x1x1xf32, #map0>, vector<1xf32>
        %2 = memref.subview %arg1[0, %arg2, %arg3] [1, 1, 1] [1, 1, 1] : memref<1x2x2xf32> to memref<1x1x1xf32, #map0>
        vector.transfer_write %1, %2[%c0, %c0, %c0] {in_bounds = [true]} : vector<1xf32>, memref<1x1x1xf32, #map0>
      }
    }
    return
  }
}
```