that gets eliminated by DCE in the canonicalizer, since the load has only a “read” effect. I could just add a write effect with no target value to the op in the presence of the volatile attribute, but this feels gross.
I’m not an expert on the semantics of volatile, but I know it is not always sufficient, and not always the most effective way, to prevent optimizations such as DCE or CSE on specific expressions.
See https://dl.acm.org/doi/10.1145/3485519 with @Son_Tuan_Vu and colleagues.
It was always on my mind to design an opaque op along the lines of this paper, or to extend the concept with a nested region to opacify anything inside (which is actually what we proposed in an earlier formalization prior to the paper above). It would also help to design some kind of dummy I/O or side-effecting op. Anyone interested, please reach out. This is important for security applications, but more generally for all kinds of debugging and traceability, and for a yet-to-be-investigated notion of zero-overhead test harnesses.
From a technical standpoint, in the beginning what I had in mind was that we would eventually augment the effects to accept parameters/arguments/whatever. So if we had atomic/volatile, it would just be a modifier on the load/store/whatever.
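As a rough self-contained sketch of that idea (plain C++ with illustrative names, not the actual MLIR effect classes), volatile/atomic could be modifiers carried on an effect instance rather than separate effect kinds, and a DCE-style query would then refuse to drop an op whose reads carry the modifier:

```cpp
#include <vector>

// Hypothetical modeling: base effect kinds plus modifier flags.
// All names here are made up for illustration, not MLIR API.
enum class EffectKind { Read, Write, Alloc, Free };

struct EffectInstance {
  EffectKind kind;
  bool isVolatile = false; // modifier on the base effect
  bool isAtomic = false;
};

// An op whose only effects are plain, non-volatile reads is trivially
// dead once its results are unused; a volatile or atomic modifier on
// any effect blocks that conclusion.
bool isTriviallyDead(const std::vector<EffectInstance> &effects,
                     bool resultsUnused) {
  if (!resultsUnused)
    return false;
  for (const EffectInstance &e : effects) {
    if (e.kind != EffectKind::Read)
      return false;
    if (e.isVolatile || e.isAtomic)
      return false;
  }
  return true;
}
```

The point of the sketch is only that the modifier travels with the effect instance, so every effect-based analysis sees it for free instead of each op interface method special-casing a `volatile` attribute.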
The bigger question w.r.t. volatile was what our modeling would actually look like and how it meshes with what LLVM has done. See the points that @mehdi_amini brought up in this thread. It’s not clear that “volatile” as a load/store modifier is actually what we want in the effect system for modeling. Whatever we end up with needs to span beyond just the LLVM world of behavior, given that memory effects are used up and down the stack.
That’s the gross fix I’m referring to. I don’t particularly like it for the general MLIR case because it’s misleading: a volatile operation doesn’t necessarily write to the address (though the one in the LLVM dialect may).
The question is more how should it be modeled with memory effects, not whether it should. MLIR is significantly more expressive than the mere notion of having a side effect.
An op that’s opaque to the memory effect modeling mechanism is just an op that doesn’t implement the MemoryEffectOpInterface. The analysis will assume that such an op may have any possible effect. However, we can’t apply this mechanism selectively, e.g., specify the effects of the llvm.load operation only in the absence of the volatile attribute and leave them unspecified otherwise. Maybe we need such a mechanism…
An effect instance can be associated with an Attribute that can refine it. This was added a while ago.
I’d be in favor of this approach and see many more use cases for it (e.g., a call op that reports memory effects when it carries attributes indicating them and otherwise conservatively leaves them unspecified). One could implement it similarly to SymbolOpInterface, where classof returns false depending on an interface method, so the mechanism would work with all current call sites as well.
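A minimal self-contained sketch of that classof pattern (plain C++ stand-ins for illustration; these are not the real MLIR interface classes): the interface only “exists” on an op when the op can actually report its effects, so every existing isa-style query conservatively sees no interface otherwise.

```cpp
// Hypothetical op state: whether it is volatile and whether it carries
// effect-describing attributes (e.g. on a call). Illustrative only.
struct Op {
  bool isVolatile = false;
  bool hasEffectAnnotations = false;
};

// SymbolOpInterface-style trick: classof consults the op itself, so
// the interface is selectively "not implemented" for some instances.
struct MemoryEffectsQuery {
  static bool classof(const Op &op) {
    return op.hasEffectAnnotations && !op.isVolatile;
  }
};

// Analyses that probe via classof automatically fall back to the
// conservative "may have any effect" treatment when it returns false.
bool mayHaveUnknownEffects(const Op &op) {
  return !MemoryEffectsQuery::classof(op);
}
```

The appeal is that no call site changes: code already written against “does this op implement the interface?” gets the conservative behavior for the volatile/unannotated case for free.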
Indeed, upstream only uses the side effect machinery to represent memory effects. IMO, volatile loads/stores still fall into the category of memory effects, but not necessarily into the existing non-parameterized alloc/store/load/free classification.