Vectorize Linalg generic Op

I have created a small pass in the Toy example that converts toy::AddOp to linalg::GenericOp. The IR I obtained is as follows:

module {
  func.func @main() {
    %cst = arith.constant dense<[1.000000e+00, 2.000000e+00, 4.000000e+00, 1.500000e+00]> : tensor<4xf64>
    %cst_0 = arith.constant dense<[1.000000e+00, 2.500000e+00, 4.000000e+00, 1.000000e+00]> : tensor<4xf64>
    %0 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%cst, %cst_0 : tensor<4xf64>, tensor<4xf64>) outs(%cst : tensor<4xf64>) {
    ^bb0(%in: f64, %in_1: f64, %out: f64):
      %1 = arith.addf %in, %in_1 : f64
      linalg.yield %1 : f64
    } -> tensor<4xf64>
    return
  }
}

I would now like to vectorize the operation. What would be the preferred way?

  1. Vectorize the linalg.generic op?
  2. Vectorize during lowering to SCF?

Any suggestion would be helpful.

You can use the Linalg vectoriser. Here’s a similar example to the one you have that demonstrates how to drive the vectoriser using the Transform dialect:
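
A minimal sketch of such a transform script, assuming a recent MLIR build with the Transform dialect interpreter (the exact op spellings may differ between versions):

module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
    // Match the linalg.generic op produced by your pass.
    %generic = transform.structured.match ops{["linalg.generic"]} in %arg0 : (!transform.any_op) -> !transform.any_op
    // Vectorize it; the shapes are static, so no explicit vector sizes are needed.
    transform.structured.vectorize %generic : !transform.any_op
    transform.yield
  }
}

You would then run something like mlir-opt --transform-interpreter on a module containing both your payload IR and this script.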

-Andrzej

Thank you for your response.
What I have understood from this example is that I have to lower the linalg.generic op to the vector dialect to achieve vectorization. Right?

Yes :slight_smile: There are other “vectorisers” in MLIR too (e.g. the SparseTensor vectoriser), but the one I suggested feels like a perfect match for what you need.
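
For reference, after vectorization the body of your function would look roughly like this (a hand-written approximation, not actual vectoriser output; attributes such as in_bounds are omitted):

%c0 = arith.constant 0 : index
%pad = arith.constant 0.000000e+00 : f64
// Read both input tensors into 4-element vectors.
%lhs = vector.transfer_read %cst[%c0], %pad : tensor<4xf64>, vector<4xf64>
%rhs = vector.transfer_read %cst_0[%c0], %pad : tensor<4xf64>, vector<4xf64>
// The scalar addf from the linalg.generic body becomes a vector addf.
%sum = arith.addf %lhs, %rhs : vector<4xf64>
// Write the result back into a tensor.
%res = vector.transfer_write %sum, %cst[%c0] : vector<4xf64>, tensor<4xf64>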


Thank you !!!