We brainstormed a bit about this in a group, and here are some thoughts:
- Implicit propagation of an attribute can be unsafe. The dialect that defines the attribute isn't always in a good position to decide whether it can be propagated. For example, information about aliasing can be invalidated by a pass, but the dialect callback has no way to know whether propagation is safe. A loop transformation can turn a parallel loop into a sequential one by introducing reuse across iterations, but if an attribute carrying the result of a previous analysis on the loop were propagated, it would be incorrect.
It seems that the number of situations where an attribute can be propagated this way is fairly limited. The current mechanism works in a context like TensorFlow, where graph transformations stay within the same dialect, but it wouldn't scale much further beyond this.
LLVM metadata can always be dropped, but more importantly it is only propagated explicitly, when a pass understands the metadata. The important part is that the transformation's semantics are taken into account in the decision of whether to propagate. This is critically missing from the current proposal, where the dialect hook can only decide to propagate an attribute based on the newly created op, without more context.
- Another aspect is merging attributes, which this proposal does not address at all. When matching a few ops and rewriting them, it is too limiting to take just one of the ops as the reference for propagation: we need to decide how to merge the attributes defined on these multiple ops and how to propagate them to the set of newly created ops.
- Finally, there are cases like function parameter attributes, where the attribute dictionary is nested and the currently proposed interface can't handle them implicitly.
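To make the first two points concrete, here is a small hypothetical Python model (none of these names are the actual MLIR API): a context-free hook that only sees the newly created op cannot account for what the rewrite did, and a multi-op rewrite forces a merge decision the proposal gives us no way to express.

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    """Toy stand-in for an operation carrying a discardable attribute dict."""
    name: str
    attrs: dict = field(default_factory=dict)

def dialect_hook(new_op: Op, name: str, value) -> None:
    """Models the proposed hook: it sees only the newly created op, so it
    cannot tell whether the rewrite invalidated the attribute."""
    new_op.attrs[name] = value

def serialize_loop(loop: Op) -> Op:
    """A rewrite that introduces reuse across iterations, turning a
    parallel loop into a sequential one."""
    new_loop = Op("sequential_for")
    for name, value in loop.attrs.items():  # implicit propagation
        dialect_hook(new_loop, name, value)
    return new_loop

loop = Op("parallel_for", {"analysis.parallel": True})
rewritten = serialize_loop(loop)
# The stale analysis result survives the rewrite -- exactly the bug:
assert rewritten.attrs == {"analysis.parallel": True}

def fuse(a: Op, b: Op) -> Op:
    """Fusing two matched ops: neither op alone is a valid "reference" for
    propagation. Here we keep only attributes both ops agree on, but that
    merge policy is a per-transformation choice, not a dialect-level one."""
    common = {k: v for k, v in a.attrs.items() if b.attrs.get(k) == v}
    return Op("fused", common)

fused = fuse(Op("a", {"align": 16, "fast": True}),
             Op("b", {"align": 16}))
assert fused.attrs == {"align": 16}
```

The sketch is only meant to show where the missing context lives: `serialize_loop` and `fuse` know what they changed, while `dialect_hook` does not.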
We propose continuing this exchange on Thursday during the ODM meeting. In the meantime, feel free to fuel the discussion here.