I am trying to convert the high-level `toy::AddOp` to an add op in `Linalg`. I have found two representations in `Linalg`.

One is from the [RFC] TOSA-to-Linalg lowering of element-wise ops:

```
#map = affine_map<(d0, d1) -> (d0, d1)>
module {
  func.func @main(%arg0: tensor<3x5xf32>, %arg1: tensor<3x5xf32>) -> tensor<?x?xf32> {
    %0 = tensor.empty() : tensor<3x5xf32>
    %1 = linalg.generic {indexing_maps = [#map, #map, #map], iterator_types = ["parallel", "parallel"]}
        ins(%arg0, %arg1 : tensor<3x5xf32>, tensor<3x5xf32>) outs(%0 : tensor<3x5xf32>) {
    ^bb0(%in: f32, %in_0: f32, %out: f32):
      %2 = arith.addf %in, %in_0 : f32
      linalg.yield %2 : f32
    } -> tensor<3x5xf32>
    %cast = tensor.cast %1 : tensor<3x5xf32> to tensor<?x?xf32>
    return %cast : tensor<?x?xf32>
  }
}
```

The other is the named op from the 'linalg' Dialect documentation:

```
linalg::AddOp
```
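For comparison, a minimal sketch (my own, not from either document) of what the named-op form of the same function might look like, assuming `linalg.add` with an explicit `tensor.empty` init tensor:

```
// Sketch: the same 3x5 element-wise add using the linalg.add named op.
// The indexing maps and the arith.addf body are implicit in the op itself.
func.func @main(%arg0: tensor<3x5xf32>, %arg1: tensor<3x5xf32>) -> tensor<3x5xf32> {
  %0 = tensor.empty() : tensor<3x5xf32>
  %1 = linalg.add ins(%arg0, %arg1 : tensor<3x5xf32>, tensor<3x5xf32>)
                  outs(%0 : tensor<3x5xf32>) -> tensor<3x5xf32>
  return %1 : tensor<3x5xf32>
}
```

If I understand correctly, the named op is essentially sugar over the same `linalg.generic` structure, since `-linalg-generalize-named-ops` can rewrite one into the other.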

I could not find any discussion of the pros and cons of either approach. My question is: to lower the high-level `toy` *language* to `LLVM IR` via `Linalg`, which approach is more appropriate?