[RFC] std elementwise ops on tensors

Both of the proposals look great to me - thanks for writing these in such detail. A couple of comments.

  1. The way the list of conditions in (2) is written, it would look like any op that doesn’t support the tensor type would trivially satisfy all of those. Would you want that?

But the op is still elementwise right? CSE will get blocked by the op’s side effects. (Minor: I assume you meant another dialect op since addf doesn’t work on memrefs - perhaps lmhlo.add? But that doesn’t return anything.)

In that response, I was referring to expanding std.addf to memrefs, which would entail opening the door to side effects on those ops (which as you said, we probably don’t want to do).

I think you are right that a carefully defined expansion of the trait could maybe cover lmhlo.add (that is, ops that operate “elementwise” but do so via out-params). As you suggest, differentiating “results” from “operands” sounds like the hardest part of that.
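To make the contrast concrete, here is a rough sketch (types and syntax are illustrative, not taken from the actual patches): `std.addf` returns its result as an SSA value, while `lmhlo.add` writes through an output memref operand, so an expanded trait would have to distinguish which operands play the role of "results":

```mlir
// Value-returning form: the result is an SSA value.
%sum = addf %a, %b : f32

// Out-param form (sketch of lmhlo-style buffer semantics): the third
// operand is an output buffer that the op writes into; nothing is returned.
"lmhlo.add"(%lhs, %rhs, %out)
    : (memref<4xf32>, memref<4xf32>, memref<4xf32>) -> ()
```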

In that light, we probably need to name this trait MapsElementwise to somehow indicate that it refers to the fact that these ops can be viewed as scalar ops that map onto larger data types, rather than some abstract “elementwise” annotation.

Good catch! We need to add another "systematic vectorization/tensorization" axiom, something like: "all operands and results of a scalar version of the op can be replaced with vectors/tensors of the corresponding element types and same shape, and the op remains valid".
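As a sketch of that axiom (illustrative only; the actual trait definition lives in the linked patch), starting from the scalar form of `addf`, every operand and result can be uniformly swapped for a vector or tensor of the same shape and corresponding element type, and the op remains valid:

```mlir
// Scalar form.
%0 = addf %a, %b : f32

// Same op with all operands/results replaced by tensors of matching shape.
%1 = addf %ta, %tb : tensor<4x8xf32>

// ...or by vectors.
%2 = addf %va, %vb : vector<4xf32>
```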

Or, another way to look at this is that MapsElementwise ops are fundamentally scalar ops, but that the MapsElementwise trait indicates how their semantics generalize to vectors/tensors.

I’ve uploaded a new patch which brings this all together with a new ElementwiseMappable trait: https://reviews.llvm.org/D90731

The std-to-linalg patch from the OP has been updated to use the new ElementwiseMappable trait: https://reviews.llvm.org/D90354

Please take a look. I think once we get some more signal on this thread, the patches should be pretty much ready to submit.

LGTM overall, seems like a good incremental step, and regardless of how we evolve this in the future we will at least get more mileage with this. Thanks @_sean_silva!