[RFC] Introduce the concept of IR listeners in MLIR

I worked on a solution to be able to track all IR mutations by attaching “listeners” to an Operation directly. At the moment we have some infrastructure for listeners around the Rewriter (and relevant subclasses). This allows keeping track of most IR mutations, but requires clients to ensure that all mutations go through the rewriter API: any direct mutation done through APIs on the Operation class, or updating an OpOperand in place, won’t be caught.

Here is the first draft implementation: ⚙ D142291 Introduce the concept of IR listeners in MLIR

The Listener interface is expected to grow with more hooks (for example for attribute modifications…), but at the moment I implemented support for these:

/// This class attaches to an operation and provides a mechanism to listen to IR
/// modifications. The listeners are notified when an operation is inserted,
/// detached, destroyed, moved, or operands are updated.
class IRListener : public llvm::RefCountedBase<IRListener> {
public:
  virtual ~IRListener() = default;
  // This method is called when the listener is attached to an operation.
  virtual void attachToOperation(Operation *op) {}
  // This method is called when an operation is inserted into a block. The
  // oldBlock is nullptr if the operation wasn't previously in a block.
  virtual void notifyOpInserted(Operation *op, Block *oldBlock,
                                Block *newBlock) {}
  // This method is called when the operation is detached from a block.
  virtual void notifyOpDetached(Operation *op) {}
  // This method is called when the operation is about to be destroyed.
  virtual void notifyOpDestroyed(Operation *op) {}
  // This method is called when the operation is moved.
  virtual void notifyOpMoved(Operation *op) {}
  // This method is called when an operand is updated.
  virtual void notifyOpOperandChanged(OpOperand &operand, Value newValue) {}
  // This method is called when a block operand is updated.
  virtual void notifyBlockOperandChanged(BlockOperand &operand,
                                         Block *newBlock) {}
};

To keep track of changes, a client derives from this interface and implements the desired hooks.
To showcase this with an example, a -test-ir-listeners pass is included in the patch; it installs the following listener on the current IR unit (and all nested operations):

class Listener final : public IRListener {
public:
  ~Listener() final {
    llvm::errs() << "===== IRListener Trace =====\n";
    llvm::errs() << traceStream.str();
    llvm::errs() << "===== End IRListener Trace =====\n";
  }
  void printOp(Operation *op) {
    op->print(traceStream, OpPrintingFlags()
                               .elideLargeElementsAttrs()
                               .printGenericOpForm()
                               .assumeVerified()
                               .useLocalScope()
                               .elideRegions());
  }
  void notifyOpInserted(Operation *op, Block *oldBlock, Block *newBlock) final {
    traceStream << "Op inserted: ";
    printOp(op);
    traceStream << "\n";
  }
  void notifyOpDetached(Operation *op) final {
    traceStream << "Op detached: ";
    printOp(op);
    traceStream << "\n";
  }
  void notifyOpDestroyed(Operation *op) final {
    traceStream << "Op destroyed: ";
    printOp(op);
    traceStream << "\n";
  }
  void notifyOpMoved(Operation *op) final {
    traceStream << "Op moved: ";
    printOp(op);
    traceStream << "\n";
  }
  void notifyOpOperandChanged(OpOperand &operand, Value newValue) final {
    traceStream << "OpOperand #" << operand.getOperandNumber()
                << " changed on Operation ";
    printOp(operand.getOwner());
    traceStream << " to " << newValue << "\n";
  }
  void notifyBlockOperandChanged(BlockOperand &operand, Block *newBlock) final {
  }

private:
  std::string trace;
  llvm::raw_string_ostream traceStream{trace};
};

This listener is destroyed only when the IR is destroyed, at which point it prints a trace of every event it was notified of; see the test case listener.mlir:

$ ./bin/mlir-opt  -test-ir-listeners -canonicalize   ../mlir/test/IR/listener.mlir
module {
  func.func @andOfExtSI(%arg0: i8, %arg1: i8) -> i64 {
    %0 = arith.andi %arg0, %arg1 : i8
    %1 = arith.extsi %0 : i8 to i64
    return %1 : i64
  }
}

===== IRListener Trace =====
Op inserted: <<UNKNOWN SSA VALUE>> = "arith.andi"(%arg0, %arg1) : (i8, i8) -> i8
Op inserted: <<UNKNOWN SSA VALUE>> = "arith.extsi"(%2) : (i8) -> i64
OpOperand #0 changed on Operation "func.return"(%4) : (i64) -> () to %3 = arith.extsi %2 : i8 to i64
Op detached: %4 = "arith.andi"(%0, %1) : (i64, i64) -> i64
Op destroyed: %0 = "arith.andi"(<<UNKNOWN SSA VALUE>>, <<UNKNOWN SSA VALUE>>) : (i64, i64) -> i64
Op detached: %1 = "arith.extsi"(%arg1) : (i8) -> i64
Op destroyed: %0 = "arith.extsi"(<<UNKNOWN SSA VALUE>>) : (i8) -> i64
Op detached: %0 = "arith.extsi"(%arg0) : (i8) -> i64
Op destroyed: %0 = "arith.extsi"(<<UNKNOWN SSA VALUE>>) : (i8) -> i64
Op destroyed: "builtin.module"() (1 elided regions...) : () -> ()
Op detached: "func.func"() (1 elided regions...) {function_type = (i8, i8) -> i64, sym_name = "andOfExtSI"} : () -> ()
Op destroyed: "func.func"() (1 elided regions...) {function_type = (i8, i8) -> i64, sym_name = "andOfExtSI"} : () -> ()
Op detached: "func.return"(<<NULL VALUE>>) : (<<NULL TYPE>>) -> ()
Op destroyed: "func.return"(<<NULL VALUE>>) : (<<NULL TYPE>>) -> ()
Op detached: %0 = "arith.extsi"(<<NULL VALUE>>) : (<<NULL TYPE>>) -> i64
Op destroyed: %0 = "arith.extsi"(<<NULL VALUE>>) : (<<NULL TYPE>>) -> i64
Op detached: %0 = "arith.andi"(<<NULL VALUE>>, <<NULL VALUE>>) : (<<NULL TYPE>>, <<NULL TYPE>>) -> i8
Op destroyed: %0 = "arith.andi"(<<NULL VALUE>>, <<NULL VALUE>>) : (<<NULL TYPE>>, <<NULL TYPE>>) -> i8
===== End IRListener Trace =====

When we run some passes after -test-ir-listeners has installed the listener, IR modification events are collected, and when the module is destroyed the listener prints the trace of modifications that were recorded.

Implementation

This costs a new pointer-sized member on each operation to be able to attach listeners. When no listener is set, we also pay the cost of a branch checking this member on every mutation.
When one or multiple listeners are attached to an operation, we pay the price of iterating a vector and one virtual dispatch per listener.
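For illustration, here is a rough sketch of what the dispatch at a mutation site could look like; the hasAttachedListeners()/getAttachedListeners() accessor names are hypothetical, not the actual API from the patch:

// Hypothetical sketch of a mutation site notifying listeners; the accessor
// names are illustrative, see D142291 for the actual implementation.
void OpOperand::set(Value newValue) {
  Operation *owner = getOwner();
  if (owner && owner->hasAttachedListeners())           // branch paid on every mutation
    for (IRListener &listener : owner->getAttachedListeners())
      listener.notifyOpOperandChanged(*this, newValue); // one virtual dispatch per listener
  // ... then perform the actual in-place update of the operand ...
}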

A listener instance is refcounted; it is kept alive for as long as there is an operation referring to it. This is what allows the -test-ir-listeners pass to install a listener and have it outlive the IR.

To be able to track mutations on Operations that didn't exist at the time the listener was set, we added some custom logic so that when an Operation is inserted into a block, it inherits the listeners from the parent op (if any).
This is what enables the listener to track mutations to operations created during canonicalization in the code above.
In the extreme, if you start from an empty ModuleOp and install a listener on it, every Operation added anywhere nested in this module will inherit this listener: we are able to track and trace every single IR mutation.
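As a sketch, the test pass installs its listener roughly as follows (attachListener is an illustrative name for the attachment entry point; see the patch for the actual API):

// Sketch of a pass installing a tracing listener on the current IR unit and
// all nested operations. attachListener is an illustrative name; the listener
// is refcounted via llvm::IntrusiveRefCntPtr so it can outlive the pass.
void TestIRListenersPass::runOnOperation() {
  auto listener = llvm::makeIntrusiveRefCnt<Listener>();
  getOperation()->walk([&](Operation *op) { op->attachListener(listener); });
  // Operations created later anywhere under this root inherit the listener
  // from their parent when inserted, so they get traced as well.
}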


Could the listener add support for tracing replaceUsers operations? In a pattern-rewrite transformation, replaceUsers operations are very common; if the listener could trace them, it would be very convenient for tracking the transformation process.

Let’s talk about this some time before you go further. This is directly related to the llvm/IR/ValueHandle.h stuff that we built in LLVM IR and that turns out to be a really bad thing IMO. I added it originally for alias analyses, with the goal of making it so the analysis would automatically stay up to date as transformations mutated the IR.

There are two problems with this:

  1. It bloats the IR with the extra pointer, as you mention. This can be reduced to a single bit plus an on-the-side hash table though.

  2. The bigger issue is that this is effectively impossible to use. You don’t get the context to update the analyses, and the granularity of updates is tiny. Building anything on top of this sort of thing is really slow because of it, and impossible to make “actually general” because there are transformations that do all sorts of things; reflecting them would require tons of high-level transformation methods to funnel them into the hooks.

In LLVM, the ValueHandle stuff was originally adopted very widely and then backed out because of these reasons. It is now effectively only used for very local and simple things that know how the transformations are being applied. At that point though, the overhead of cost and complexity defeats the purpose of using something like this: if you have tight coupling between the code doing the IR transforms and the things that need updating, you might as well directly update them.


I just want to add that GlobalISel uses GISelChangeObserver for the same idea.

I think this could be interesting for debugging passes, as it allows hooking in at a low granularity and observing what happens.

This sounds interesting, but might have too fine a granularity, as Chris mentions above. In some of the cases for rewriter listeners that I have, it is interesting to see even higher-level mutations like “replace an op with another op” (rather than with a list of values) or “combine these two ops into one”. That being said, I would really love this to be implemented as a sort of sanitizer to intercept all direct IR mutations bypassing the rewriter when it is supposed to be used.

This was also on my mind: a mechanism for adding safety features. I haven’t fully thought this through, but if using it more for those kinds of use cases, I might have considered a global context-level IR mutation listener instead of fine-grained registration per op. Then you could gate all of the logic and overhead on one branch on a context-owned bool.

I’m also not sure that the granularity of the registration matches use cases I have seen.

It seems like this feature might be useful for a “better” MLIR diff experience. (Maybe that was the goal?)

Could be useful for diffing, but I think it’s not needed (you could just give each op a unique location and achieve ~the same thing, I think?).

As one more point in this problem space, there’s the CFG update interface (llvm/IR/CFGUpdate.h / llvm::cfg::Update) which can be used to update (post-)dominator trees in bulk. A similar bulk update mechanism would be interesting for mutating IR, especially from a safety perspective.


Kind of, but it is a bit of a crutch: you rely on location propagation (which is imperfect), and there is also the fundamental issue of many-to-many rewrites, which can make it hard to trace things back.
It kind of works for many debugging use cases though (where the imperfection does not prevent you from getting far enough, especially in combination with other techniques like -debug traces).

Yeah, I have some familiarity with ValueHandle and its limits, but it’s not clear to me that it is as bad as you make it sound; in particular, I’m not aware of a viable alternative either! (including in LLVM, where it is still used pervasively, see below).

In particular, I don’t think there is so much “coupling” (conceptually) between analyses and transformations, and at least I don’t believe we ought to increase the coupling. I’d be concerned if we had to teach every transformation about everything that needs to be updated, and beyond just “teaching them about it”, you’d need to thread state through everything.

This is why we tried to implement the “listeners” on the Rewriter in the first place, I believe: it is one way to decouple transformations from other components which need to track the state of the IR, or at least some mutations.

The listeners are convenient, but they are very limited since they rely on the context of the rewriter. Without something equivalent to ValueHandle, it is currently impossible for any analysis to cache some piece of IR without ending up with dangling pointers or invalid state.

Let me try below to contextualize ValueHandle for everyone else here who may not be familiar with what exists in LLVM (people familiar with ValueHandle can probably skip this block).

From what I can tell, ValueHandle is mainly organized around two callbacks that the Value class invokes directly in the relevant situations:

  static void ValueIsDeleted(Value *V);
  static void ValueIsRAUWd(Value *Old, Value *New);

The Value class has a flag indicating if a handle is set, and these callbacks are relying on a table in the LLVMContext to map back a Value to a ValueHandle:

  /// ValueHandles - This map keeps track of all of the value handles that are
  /// watching a Value*.  The Value::HasValueHandle bit is used to know
  /// whether or not a value has an entry in this map.
  using ValueHandlesTy = DenseMap<Value *, ValueHandleBase *>;
  ValueHandlesTy ValueHandles;

There are four kinds of ValueHandle:

  enum HandleBaseKind { Assert, Callback, Weak, WeakTracking };

And the class documentation for each of these:

/// Value handle that asserts if the Value is deleted.
///
/// This is a Value Handle that points to a value and asserts out if the value
/// is destroyed while the handle is still live.  This is very useful for
/// catching dangling pointer bugs and other things which can be non-obvious.
/// One particularly useful place to use this is as the Key of a map.  Dangling
/// pointer bugs often lead to really subtle bugs that only occur if another
/// object happens to get allocated to the same address as the old one.  Using
/// an AssertingVH ensures that an assert is triggered as soon as the bad
/// delete occurs.
template <typename ValueTy>
class AssertingVH

(I’ll skip PoisoningVH, which is somewhat similar to AssertingVH)

/// A nullable Value handle that is nullable.
///
/// This is a value handle that points to a value, and nulls itself
/// out if that value is deleted.
class WeakVH : public ValueHandleBase {
/// Value handle that is nullable, but tries to track the Value.
///
/// This is a value handle that tries hard to point to a Value, even across
/// RAUW operations, but will null itself out if the value is destroyed.  this
/// is useful for advisory sorts of information, but should not be used as the
/// key of a map (since the map would have to rearrange itself when the pointer
/// changes).
class WeakTrackingVH : public ValueHandleBase {
/// Value handle that tracks a Value across RAUW.
///
/// TrackingVH is designed for situations where a client needs to hold a handle
/// to a Value (or subclass) across some operations which may move that value,
/// but should never destroy it or replace it with some unacceptable type.
template <typename ValueTy> class TrackingVH {
/// Value handle with callbacks on RAUW and destruction.
///
/// This is a value handle that allows subclasses to define callbacks that run
/// when the underlying Value has RAUW called on it or is destroyed.  This
/// class can be used as the key of a map, as long as the user takes it out of
/// the map before calling setValPtr() (since the map has to rearrange itself
/// when the pointer changes).  Unlike ValueHandleBase, this class has a vtable.
class CallbackVH : public ValueHandleBase {

AssertingVH/PoisoningVH are really meant to catch bugs: they are pervasively used to ensure that LLVM analyses and intermediate data structures are correctly handled and kept up to date by transformations on the IR.
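The typical usage pattern is simply to key a cached data structure by the handle type instead of by a raw pointer. A minimal sketch (not taken from any particular pass):

#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/ValueHandle.h"

// Sketch: caching per-value data keyed by AssertingVH rather than Value*, so
// that deleting a Value which still has a cache entry asserts immediately
// instead of silently leaving a dangling key behind.
llvm::DenseMap<llvm::AssertingVH<llvm::Value>, unsigned> costCache;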

Here are almost all the places using it:

llvm/include/llvm/Analysis/AliasSetTracker.h
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
llvm/include/llvm/Analysis/MemorySSAUpdater.h
llvm/include/llvm/Analysis/ScalarEvolution.h
llvm/include/llvm/Transforms/Scalar/GVN.h
llvm/include/llvm/Transforms/Scalar/JumpThreading.h
llvm/include/llvm/Transforms/Scalar/Reassociate.h
llvm/include/llvm/Transforms/Utils/BypassSlowDivision.h
llvm/include/llvm/Transforms/Utils/PredicateInfo.h
llvm/include/llvm/Transforms/Utils/ScalarEvolutionExpander.h
llvm/lib/Analysis/LazyValueInfo.cpp
llvm/lib/Analysis/MemorySSAUpdater.cpp
llvm/lib/CodeGen/CodeGenPrepare.cpp
llvm/lib/Target/PowerPC/PPCLoopInstrFormPrep.cpp
llvm/lib/Transforms/IPO/MergeFunctions.cpp
llvm/lib/Transforms/Scalar/DivRemPairs.cpp
llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp
llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
llvm/lib/Transforms/Utils/Local.cpp
llvm/lib/Transforms/Utils/SimplifyIndVar.cpp

Now, WeakTrackingVH/TrackingVH are not in the same category: instead of just delegating the task of keeping analysis results and other intermediate data structures consistent to the actual transformation, they are used to actually invalidate and/or update these data structures.

llvm/include/llvm/Analysis/CGSCCPassManager.h
llvm/include/llvm/Analysis/CallGraph.h
llvm/include/llvm/Analysis/IVDescriptors.h
llvm/include/llvm/Analysis/IVUsers.h
llvm/include/llvm/Analysis/LoopAccessAnalysis.h
llvm/include/llvm/Analysis/MemoryBuiltins.h
llvm/include/llvm/Analysis/MemorySSAUpdater.h
llvm/include/llvm/Analysis/ObjCARCAnalysisUtils.h
llvm/include/llvm/Transforms/Scalar/NaryReassociate.h
llvm/include/llvm/Transforms/Utils/Cloning.h
llvm/include/llvm/Transforms/Utils/Local.h
llvm/include/llvm/Transforms/Utils/LoopUtils.h
llvm/include/llvm/Transforms/Utils/ScalarEvolutionExpander.h
llvm/include/llvm/Transforms/Utils/SimplifyIndVar.h
llvm/include/llvm/Transforms/Utils/ValueMapper.h
llvm/include/llvm/Transforms/Vectorize/SLPVectorizer.h
llvm/lib/Analysis/CGSCCPassManager.cpp
llvm/lib/Analysis/CallGraphSCCPass.cpp
llvm/lib/Analysis/MemorySSAUpdater.cpp
llvm/lib/CodeGen/CodeGenPrepare.cpp
llvm/lib/Target/AArch64/AArch64StackTagging.cpp
llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp
llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
llvm/lib/Target/XCore/XCoreLowerThreadLocal.cpp
llvm/lib/Transforms/Coroutines/CoroSplit.cpp
llvm/lib/Transforms/IPO/Attributor.cpp
llvm/lib/Transforms/IPO/GlobalOpt.cpp
llvm/lib/Transforms/IPO/MergeFunctions.cpp
llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
llvm/lib/Transforms/ObjCARC/ProvenanceAnalysis.h
llvm/lib/Transforms/Scalar/DivRemPairs.cpp
llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp
llvm/lib/Transforms/Scalar/InstSimplifyPass.cpp
llvm/lib/Transforms/Scalar/LoopFlatten.cpp
llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp
llvm/lib/Transforms/Scalar/LoopInstSimplify.cpp
llvm/lib/Transforms/Scalar/LoopSimplifyCFG.cpp
llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
llvm/lib/Transforms/Scalar/LowerConstantIntrinsics.cpp
llvm/lib/Transforms/Scalar/NaryReassociate.cpp
llvm/lib/Transforms/Scalar/Reassociate.cpp
llvm/lib/Transforms/Scalar/Scalarizer.cpp
llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
llvm/lib/Transforms/Utils/CloneFunction.cpp
llvm/lib/Transforms/Utils/InlineFunction.cpp
llvm/lib/Transforms/Utils/Local.cpp
llvm/lib/Transforms/Utils/LoopUnroll.cpp
llvm/lib/Transforms/Utils/LoopUnrollRuntime.cpp
llvm/lib/Transforms/Utils/LoopUtils.cpp
llvm/lib/Transforms/Utils/ScalarEvolutionExpander.cpp
llvm/lib/Transforms/Utils/SimplifyIndVar.cpp
llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp

For example, InnerLoopVectorizer::fixReduction() captures the start value of a “recurrence”:

  TrackingVH<Value> ReductionStartValue = RdxDesc.getRecurrenceStartValue();

It is then used a couple of times later in the function, but in the meantime multiple utilities that transform the code are invoked. If any of them ends up RAUW’ing this value, the handle stays up to date.

Here is a small function using these “Tracking” ValueHandle for example: llvm-project/NaryReassociate.cpp at main · llvm/llvm-project · GitHub

Finally, CallbackVH is probably the most sophisticated; it’s used in the following places:

llvm/include/llvm/Analysis/AssumptionCache.h
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
llvm/include/llvm/Analysis/BranchProbabilityInfo.h
llvm/include/llvm/Analysis/DomTreeUpdater.h
llvm/include/llvm/Analysis/GlobalsModRef.h
llvm/include/llvm/Analysis/IVUsers.h
llvm/include/llvm/Analysis/PhiValues.h
llvm/include/llvm/Analysis/ScalarEvolution.h
llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
llvm/include/llvm/Passes/StandardInstrumentations.h
llvm/lib/Analysis/LazyValueInfo.cpp

Here is for example how the “AssumptionCache” updates itself when a Value that has some known “assumptions” is RAUW’d with a new one: llvm-project/AssumptionCache.cpp at main · llvm/llvm-project · GitHub
Or see https://github.com/inkryp/llvm-project/blob/main/llvm/lib/Analysis/ScalarEvolution.cpp#L13357-L13384 for an example of cache invalidation: SCEV clears its cached knowledge associated with the users of the value being RAUW’d with a new one, so that the analysis gets recomputed.
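To make the shape of this concrete, here is a minimal sketch (not actual LLVM code; SideCache is a made-up placeholder) of a CallbackVH subclass keeping a side data structure consistent:

#include "llvm/IR/Value.h"
#include "llvm/IR/ValueHandle.h"

// Made-up placeholder for an analysis-side cache.
struct SideCache {
  void erase(llvm::Value *v);                        // drop the entry for v
  void remap(llvm::Value *oldV, llvm::Value *newV);  // move the entry to newV
};

// Sketch of a CallbackVH subclass: the ValueHandle machinery invokes these
// overrides when the watched Value is destroyed or RAUW'd.
class SideCacheVH final : public llvm::CallbackVH {
  SideCache &cache;

public:
  SideCacheVH(llvm::Value *v, SideCache &c) : llvm::CallbackVH(v), cache(c) {}

  // Invoked when the Value is deleted: drop the stale cache entry.
  void deleted() override {
    cache.erase(getValPtr());
    setValPtr(nullptr);
  }

  // Invoked on replaceAllUsesWith(): move the cache entry to the new Value.
  void allUsesReplacedWith(llvm::Value *newValue) override {
    cache.remap(getValPtr(), newValue);
  }
};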

ValueHandle has been pushed in LLVM as the better solution to replace other ad-hoc approaches; see for example (2015): [LLVMdev] AliasAnalysis update interface - a tale of sorrow and woe

One limitation is that the set of callbacks does not allow capturing some modifications of the context around the Value captured by the handle, for example: [llvm-dev] stale info in the assumption cache, where a block being outlined into a new function does not trigger a ValueHandle update, leaving the AssumptionCache incorrect.


For the record, I have no strong opinion on adding listeners or not. It won’t impact my work negatively in any way I can see, and it may even be helpful in a number of cases we’re looking at between analyses and transforms.

My replies below are merely to expand the discussion and hash out the issues, not to bring discord.

Listeners are in a sweet spot where the coupling isn’t mandatory, but if you have it, it’s useful, like metadata in IR.

But it also has the same problems as metadata:

  • you need to add the right stuff, keep it up-to-date with the rest of the infra
  • consumers need to know what to do with them, which may not be like the producer intended

And with that, it’s all too common that some producers will be more diligent than others, some will upgrade when the infra changes, others not, some consumers will do “hacky stuff” and potentially destroy information without intent, etc.

That is not to say that it is bad or useless. I think it’s useful, but I fear adding too many loosely coupled generic frameworks would lead to them being misused and abandoned, or worse, overused and impossible to get rid of.

I don’t read that email in the same light. Nowhere is Chandler stating anything as “the best solution”, just as an existing solution that does the same thing as this ad-hoc infrastructure.

To me, Chandler’s example is what already happens with ValueHandle, and IR metadata, and similar innocuous infrastructure that gets misused. Adding yet another tracking device across passes, even if this is in MLIR instead of LLVM, seems to me like someone will write a similar email in a few years.

I guess the devil’s advocate question is: why can’t we extend ValueHandle’s infrastructure to deal with both LLVM and MLIR?

I am actually not sure what you mean here, I struggle to connect it to the proposed mechanism concretely.
For example what does it mean that “some producers will be more diligent than others”? Are you talking about my proposal or referring to the “Rewriter” instead?
The way I implemented it here means there is no notion of a “producer” (external to the core of MLIR) and nothing that anyone writing transformations needs to know about or adjust: it’s fully transparent.

I was referring to:

But there is (IMO) a much better way to accomplish the same task: use a
ValueHandle to trigger the update on deletion. This will prevent widespread
failure to use the deleteValue API to update alias analysis.

The last sentence makes it pretty clear that this is presented as a superior solution. The notion of it being “the best” is rather me concluding that in the absence of other solutions. There may be hypothetical other solutions, but I’m unaware of them. If someone has experience from other compiler frameworks about how to handle this, I’d love to brainstorm more!

What I am doing here is not much different than ValueHandle conceptually, as far as I can tell? Of course we can look at whether we use a side map in the context instead of a pointer on the operation, but that’s an implementation detail we can sort out after figuring out the conceptual “feature” the framework should provide.

If you’re talking about code reuse, we could look into it, but I’m not sure the component is complex enough to warrant reuse across LLVM IR and MLIR: this kind of thing always looks appealing on the surface but is not very practical when we dig in (for example, at the time we looked into whether we should refactor the LLVM “new” pass manager instead of building one for MLIR).

IMO: Chandler’s email shows what happens when dedicated/coupled APIs are used instead of having the framework handle it.
The “not using ValueHandle” was the problem reported; I just read this as ValueHandle being the more principled approach to the problem (in particular, no coupling to transformations).

I would expect listeners not attached to any operation to be useful for “automated” analysis updates:
for example, every analysis could provide its own updater/listener. Transformation passes could register the listeners for the analyses they use in the IRRewriter used to edit the IR.
This would allow sharing analysis update code across transformation passes, which is barely done in LLVM.
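As an illustration of that idea (nothing here exists today; MyAnalysis and its invalidate() entry point are made up), an analysis could ship an updater reusing the hooks from the proposed interface:

// Illustrative only: an analysis-provided updater built on the proposed
// IRListener hooks, so that any pass registering it gets the analysis kept
// up to date (or at least invalidated) for free.
class MyAnalysisUpdater final : public IRListener {
public:
  explicit MyAnalysisUpdater(MyAnalysis &analysis) : analysis(analysis) {}
  void notifyOpDestroyed(Operation *op) final { analysis.invalidate(op); }
  void notifyOpMoved(Operation *op) final { analysis.invalidate(op); }
  void notifyOpOperandChanged(OpOperand &operand, Value /*newValue*/) final {
    analysis.invalidate(operand.getOwner());
  }

private:
  MyAnalysis &analysis; // hypothetical analysis exposing invalidate(Operation *)
};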

But what is the expected use case for attaching a listener to operations?

Also, it is possible to edit IR without going through a rewriter (which I dislike). Is this intentional, or should we get rid of that?

Well, this is kind of a major part of the issue: the Rewriter is not the main way of editing IR; instead, the Rewriter is a temporary object that is currently set up mostly by a pattern driver and passed into the pattern application logic.

That may have been a tangent I followed…

I’m talking about the usage of your proposal, not the proposal itself. Your proposal is self-contained and just a way to register listeners; nothing mandates what people do with those. That’s fine.

But usually, the more generic the infrastructure, the more it leads to unintended uses. If people use it to print stuff, that’s simple. If they use it to keep semantics across different passes (which is what I would use it for), then it gets more complicated.

If those frameworks (built on top of listeners) become official in MLIR, for example to have a more resistant form of metadata, then what the listener does when encountering change will directly impact the quality of such metadata. This is what I meant by being “diligent”.

Metadata is optional and passes can drop it, precisely to avoid requiring everyone to be diligent about everyone else’s random information, and listeners don’t change that. So my question is: for this particular use case, why would it be better than metadata?

But that was a massive tangent, so it got lost. It may not be an important use-case, or even a potential one, so ignore if it’s not relevant.

I agree, but it still would be good to dig a bit deeper, like you did with the pass manager, to make sure there is consensus that this is the best way. I don’t really like that the MLIR pass manager does very similar things as the LLVM one, but it’s a whole new implementation. The more of those things we have in MLIR, the more we’ll have to duplicate implementation across projects in the future.

Got it, I think: you used “producer” because these listeners are “producing” updated “metadata”, right? (You were a step further than me: I was thinking of the listener as the “receiving” end of an IR modification.)

Correct! 🙂

Related work in LLVM: [RFC] Lightweight LLVM IR Checkpointing