In my code, I want to create a `linalg.BatchMatmulOp`.
```python
output_shape = [1, 13, 13]
tensor_type = ir.RankedTensorType.get(output_shape, ir.F32Type.get())
op1 = tensor.EmptyOp([1, 13, 13], ir.F32Type.get())
op2 = tensor.EmptyOp([1, 13, 13], ir.F32Type.get())
op3 = tensor.EmptyOp([1, 13, 13], ir.F32Type.get())
op4 = linalg.BatchMatmulOp([op1.result, op2.result], [op3.result], [tensor_type])
```
But I get the wrong result: the op is emitted in the generic form, with an empty region:
```mlir
%3 = "linalg.batch_matmul"(%0, %1, %2) ({
}) {linalg.memoized_indexing_maps = [affine_map<(d0, d1, d2, d3) -> (d0, d1, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d3, d2)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2)>], operand_segment_sizes = array<i32: 2, 1>} : (tensor<1x13x13xf32>, tensor<1x13x13xf32>, tensor<1x13x13xf32>) -> tensor<1x13x13xf32>
```
Is there something wrong in my code? Thank you for your help.
Instead of constructing the op class directly, you probably want to use the DSL wrapper function. Constructing the `OpView` by hand leaves the op's region empty (that empty `({ })` region is why it prints in the generic form), while the wrapper also builds the region body for you:
```python
from mlir.dialects import linalg
...
op4 = linalg.batch_matmul(op1.result, op2.result, outs=[op3.result])
```