[RFC] Interface for destination-style ops

Tied operands are also used for shape reification, e.g. to derive relations like dim(result) == dim(tiedOperand(result)).
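As an illustration (a hedged sketch; the op choice and names are just examples), a dst-style op ties each result to an init operand, so a dim of the result can be reified as a dim of that operand:

```mlir
// linalg.fill is dst-style: its result is tied to the outs operand %init.
%c0 = arith.constant 0 : index
%fill = linalg.fill ins(%cst : f32) outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
%d0 = tensor.dim %fill, %c0 : tensor<?x?xf32>
// Shape reification can instead query the tied operand, which may be
// defined earlier and thus enable further simplification:
%d0_reified = tensor.dim %init, %c0 : tensor<?x?xf32>
```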

Dst-style ops have nice bufferization properties, but they also work nicely with tiling: when you fuse into a tensor.extract_slice, you also tile the corresponding init. Also, inputs() can be used to find candidate producers to fuse.
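A rough sketch of what this looks like after tiling (op choice and tile sizes are illustrative only): the slice taken of the init matches the slice of the result being produced.

```mlir
// A [4, 8] tile of an elementwise op: input and init are sliced with the
// same offsets/sizes, and the computed tile is inserted back into the init.
%in_tile   = tensor.extract_slice %in[%i, 0] [4, 8] [1, 1]
             : tensor<?x8xf32> to tensor<4x8xf32>
%init_tile = tensor.extract_slice %init[%i, 0] [4, 8] [1, 1]
             : tensor<?x8xf32> to tensor<4x8xf32>
%res_tile  = linalg.elemwise_unary ins(%in_tile : tensor<4x8xf32>)
             outs(%init_tile : tensor<4x8xf32>) -> tensor<4x8xf32>
%updated   = tensor.insert_slice %res_tile into %init[%i, 0] [4, 8] [1, 1]
             : tensor<4x8xf32> into tensor<?x8xf32>
```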

I’d also like to understand the merits of the interface outside of the LinalgStructuredInterface context. From just the list of what moves/stays, it’s hard to gauge whether this is bringing over unnecessary technical debt.

I don’t see a reason why it couldn’t be included, if the interfaces are meant to support bufferization. I’d be more concerned about putting a very specific interface in mlir/Interfaces/.

– River

It brings more code reuse across dst-style ops: LinalgExt, GmlSt, and TensorOps already have dst-style ops. I think introducing this interface actually reduces the technical debt.

It is true that this interface is helpful for bufferization, but it also helps with tiling and shape reification.

These concepts have shown up in enough places and been independently re-invented enough that I’m willing to presume that there is a missing core interface, but let’s walk it through conceptually a bit step by step.

Order of questions in my mind:

  • Determine which is more general: proposed DestinationStyleOpInterface or some re-spelling of TiedOpInterface. Can one be expressed in terms of the other? Is there a simplification?
  • Run under the assumption that this is definitely useful for any tensor-based op that we want to bufferize and plan to put it in the bufferization dialect.
  • Evaluate the other algorithms/ops which operate at this level and determine if they make sense to be expressed with this interface. Candidates:
    • Tiling
    • Fusion
    • Shape reification
    • Side effects/aliasing
    • SCF ops?
  • Pick the highest spanning point and put the interface there (probably one of bufferization, linalg or lib/Interfaces).

My 2 cents on how to walk this forward.


@stellaraccident I took a look at TiedOpInterface and it is more general than DestinationStyleOpInterface in terms of how it handles the results-to-operands mapping. However, DestinationStyleOpInterface imposes a more rigid structure that will also allow us to unify printing/parsing and cloning/bufferization of the operations.

I think it is useful for tensor-based ops that we bufferize, but unfortunately not all of them have these semantics, e.g. tensor.pad, tensor.generate, etc.
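For instance (a sketch), tensor.pad has no init operand: its result shape is computed from the source shape plus the padding amounts, so there is no tied operand to bufferize into or to reify shapes from.

```mlir
// The result shape is derived from %t's shape and the low/high padding,
// not from any operand that the result could be tied to.
%padded = tensor.pad %t low[1, 2] high[3, 4] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
```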

Tiling and fusion use TilingInterface, but I don’t think it makes any assumptions about the op being in dst-style.

Shape reification does use it.

SCF ops like ForOp are not really dst-style in the general case, unless they result from tiling.

I would try to walk this forward in two stages. First, we would create DestinationStyleOpInterface.td in the Linalg dialect. That would already be a pretty big refactoring step, but still nothing controversial. Then we can decide where it should live after Nicolas is back.

I have always objected to IREE’s TiedOpInterface – it is used with different semantics in different places, and the exact meaning of “tied” is context-dependent (it is used in way more places than where you would use the interface proposed here).

I think that something which formalizes this specific, useful aspect of linalg ops, the one that allows crossing the tensor/memref boundary consistently, is super useful though. But I’m opposed to some vague “tied” concept.

Can we print as “outs” for buffers and “inits” for tensors?
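Assuming the hypothetical inits spelling, the printed forms would differ only in the keyword (a sketch; today linalg prints outs in both cases):

```mlir
// Tensor semantics: the operand is an init that seeds the result value.
linalg.matmul ins(%a, %b : tensor<4x8xf32>, tensor<8x16xf32>)
              inits(%c : tensor<4x16xf32>) -> tensor<4x16xf32>
// Buffer semantics: the operand is the actual destination buffer.
linalg.matmul ins(%am, %bm : memref<4x8xf32>, memref<8x16xf32>)
              outs(%cm : memref<4x16xf32>)
```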

No, it doesn’t. It has a method called getDestinationOperands which I am planning to revisit because it might not actually be needed. Currently, the method also takes a builder, which allows it to generate the SSA value to use as the destination if needed. This should work in general (I have tested it in some cases on tensor operations as well). I am planning to see if we can drop this method completely, though. I think it is an artifact of scf.for and destructive updates, not related to tiling and fusion.

That’s a very Linalg-specific thing to do, but I think we can do it.

Then the interface doesn’t belong in Linalg from the layering perspective. While LinalgExt and GmlSt, both downstream, may choose to depend on Linalg to implement the interface, Tensor should not. (I know that we can avoid the dependency cycle with an external implementation of the interface for Tensor, but this still sounds hacky.)

I don’t mind it being extracted within Linalg first, you gotta start somewhere.


Yes, it can be done.

Just to confirm that we are on the same page:

  1. DestinationStyleOpInterface is needed, and nobody objects to that.
  2. inits or outs will be printed depending on whether the op has tensor or buffer semantics.
  3. The place where the interface should live still has to be decided, but that can be done after the interface is extracted within Linalg.

I’m late to this and don’t have a strong opinion, but the print template of an op changing based on introspecting the types of its values sounds like an annoying complication for little benefit. Also, while printing “inits” for tensors is beneficial to convey that the operand doesn’t have to be the “out” location, that contradicts the name DestinationStyleOpInterface.

+1 - not a strong opinion but I think that inits is fine for both.

+1 as well for having consistency between buffers and tensors (but people do get confused by it :slight_smile: )

Great, so we have discussed the most important topic then: inits vs outs :slight_smile:


I played around with extracting the methods that are marked with [MOVES] into a new interface, like @pifon2a suggested. One problem with this is that ideally we would like to have interface inheritance here; without it, a lot of Linalg code wants access to methods from both LinalgStructuredInterface and the (to-be-created) DestinationStyleOpInterface. But as far as I understand, interface inheritance is not supported right now, and I also don’t know how hard it would be to support.

So maybe we should just create a new interface, without trying to migrate Linalg to use this interface?

Yeah, I was worried about that.

Do you mean this as a first step or the final state? As a step, “you gotta start somewhere” but I also don’t want linalg doing its own thing at the end.

We could do the new interface now, while we lack support for interface inheritance. If we want to, we could also hand-write the inheritance by forwarding all the shared methods of the Linalg interface to the corresponding implementations in the destination-style interface. That would avoid code duplication and also keep the two interfaces aligned while interface inheritance is being built. I have not tried this, so I don’t know whether it actually works.

I briefly talked to @ftynse about interface inheritance, and I guess a first step would be to figure out what exactly it means. There is a static/dynamic verification component to it (ensuring that the inherited interface is actually implemented), but it likely should also mean that the inheriting interface gets (default) implementations that forward to the inherited one.

But then again, I am not fully convinced that interface inheritance is actually a concept we want to have. Another way to solve this could be to migrate the Linalg code over time to use one interface or the other.