DataFlowAnalysis.h questions

Hi all,

I’m working on implementing, effectively, SCCP for the FIRRTL dialect in CIRCT; the WIP code is here. I just learned about the new DataFlowAnalysis.h file: it looks really well done, and I would love love love to adopt it instead of duplicating the skeleton of SCCP yet again.

A few questions:

  1. One of the challenges we have is the notion of “wires” and “ports” that have “connect semantics”. In LLVM IR terms, these are analogous to an alloca that isn’t allowed to escape: you know that there are only “loads” and “stores” to these objects. Is there a way to get DataFlowAnalysis to handle this sort of thing?

    The way IMConstProp handles it is through a bit of specialized logic when handling the connects (the “store” analogy): it sets a lattice value on one of its operands.
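To make the “store” analogy concrete, here is a minimal self-contained sketch (toy types, not the actual CIRCT or MLIR code) of what that specialized connect handling looks like: the transfer function for a connect joins the lattice value of its source into the lattice slot of its *destination operand*, rather than a result.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy constant lattice: Unknown < Constant(c) < Overdefined.
struct LatticeValue {
  enum Kind { Unknown, Constant, Overdefined };
  Kind kind = Unknown;
  int constant = 0;

  // Standard SCCP-style join; returns true if this element changed
  // (the analogue of mlir::ChangeResult).
  bool join(const LatticeValue &rhs) {
    if (rhs.kind == Unknown || kind == Overdefined)
      return false;
    if (kind == Unknown) {
      *this = rhs;
      return true;
    }
    if (rhs.kind == Overdefined || constant != rhs.constant) {
      kind = Overdefined;
      return true;
    }
    return false;
  }
};

// Stand-ins for mlir::Value and the solver's per-value lattice storage.
using ValueId = std::string;
using LatticeMap = std::map<ValueId, LatticeValue>;

// The "connect" transfer function: unlike a normal op, it updates the
// lattice of one of its *operands* (the destination wire/port), treating
// the connect like a store to a non-escaping alloca.
bool visitConnect(LatticeMap &lattices, const ValueId &dest,
                  const ValueId &src) {
  return lattices[dest].join(lattices[src]);
}
```

A wire driven by two conflicting connects then naturally joins to Overdefined, just as two stores of different constants would in the alloca analogy.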

  2. Because our ports (analogy = function arguments and results) have connect semantics, we have to do special work to propagate values across instances (analogy = function calls) for interprocedural analysis. The existing MLIR call interfaces aren’t powerful enough to model what we need, AFAIK. Is there a recommended way of handling this?

  3. One of the future extensions I plan to add to IMConstProp is a field-sensitive model for aggregate types (analogy = first-class aggregates), where instead of storing DenseMap<Value, LatticeValue> the solver stores DenseMap<pair<Value, /*fieldId*/unsigned>, LatticeValue>. Is there a reasonable model for this in DataFlowAnalysis, or would I have to put field sensitivity into the lattice value somehow?
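To illustrate the keying change described in (3), here is a minimal sketch (illustrative stand-in types, not MLIR’s actual DenseMap/Value) of a solver keyed on (value, fieldId) pairs, so each scalar field of an aggregate gets its own independent lattice element:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Toy lattice states; a real implementation would use a full element type.
enum class Lattice { Unknown, Constant, Overdefined };

using ValueId = int;                           // stand-in for mlir::Value
using FieldKey = std::pair<ValueId, unsigned>; // (value, fieldId)

struct FieldSensitiveSolver {
  std::map<FieldKey, Lattice> lattices;

  // Lazily created, defaulting to the bottom (Unknown) state, mirroring
  // how DataFlowAnalysis-style solvers materialize lattice elements.
  Lattice &getLatticeElement(ValueId v, unsigned fieldId) {
    return lattices.try_emplace({v, fieldId}, Lattice::Unknown)
        .first->second;
  }
};
```

The point of the pair key is that updating one field of an aggregate leaves the other fields’ elements untouched, which a single per-Value element cannot express.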

Thanks for the help, it looks like a really nice interface!


If you don’t have the kind of counterfactual propagation problem I describe below, then you should be able to just getLatticeValue(op.getOperand()).join(...) on the operand. Due to monotonicity I don’t think there should be any issue (though of course, the pass iterates forward, so to the extent that this propagates information backward, the propagation might go quadratic for a pathological use case).

I have a “store” op that is similar to this in npcomp (numpy.overwrite_array).

The part that I couldn’t use DataFlowAnalysis.h for is a situation like this:

%tensor = ...
%array = ...
// safely abort the program if something is wrong
abort_if %guard_predicate
numpy.overwrite_array %tensor overwrites %array

numpy.overwrite_array has UB if %tensor and %array don’t already have the same shape and dtype, but I cannot simply set the shape lattice information I have for %tensor onto %array, as that could counterfactually propagate information to use(%array), ignoring the intervening abort_if. I don’t think this can be handled by a pure sparse dataflow engine like DataFlowAnalysis.h (without imbuing some sort of MemorySSA/“EffectSSA” structure onto the IR, or another abstraction that converts effects into use-def chains).

Actually, it looks like the framework depends on visitOperation’s ChangeResult return value, so changing arbitrary lattice values won’t work. To enable this, one possibility would be to track lattice value changes at a single point of truth, where the join happens (similar to how LiveMap internally updates the “changed” boolean for all the lattice values when they change).
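A minimal sketch of that “single point of truth” idea (toy types, assuming a deliberately simplified totally ordered lattice): all updates go through the solver’s join(), which itself records which elements changed, so the framework can drive re-visitation without relying on visitOperation’s return value.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <set>
#include <string>

struct Solver {
  // Toy lattice encoded as a rank: 0 = Unknown, 1 = Constant,
  // 2 = Overdefined; join is max on the rank.
  std::map<std::string, int> lattices;
  // Values whose users must be revisited; maintained by join() itself,
  // analogous to LiveMap's internal "changed" flag.
  std::set<std::string> changed;

  void join(const std::string &value, int incoming) {
    int &cur = lattices[value];               // defaults to Unknown (0)
    int next = std::max(cur, incoming);       // monotone join
    if (next != cur) {
      cur = next;
      changed.insert(value); // the single point where "changed" is recorded
    }
  }
};
```

Because the change tracking lives inside join(), a transfer function may update any lattice element (including an operand’s, as in the connect case) and the worklist still sees it.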

Yeah, that’s the issue: getting the right operations added to the worklist. I suspect a minor modification could make it work; I can explore that.

I’m more concerned about the field sensitivity aspect though, I think that’s a fairly significant change to the core data structure.


If we had a “product lattice” helper would that be enough? (make lattice for Value a product of N lattices, one for each field). Or is there a functionality/performance/legibility concern with doing that? (forgive my naivete – I have never written a field-sensitive analysis)
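For concreteness, here is a minimal sketch (illustrative names only, not a proposal for the actual MLIR API) of what such a product-lattice helper might look like: the lattice for a Value is an N-way product of per-field sublattices, and join is applied pointwise, reporting whether any component changed.

```cpp
#include <cassert>
#include <vector>

// Toy sublattice, totally ordered for simplicity: 0 = Unknown,
// 1 = Constant, 2 = Overdefined; join is max.
struct SubLattice {
  int state = 0;
  bool join(const SubLattice &rhs) {
    if (rhs.state <= state)
      return false;
    state = rhs.state;
    return true;
  }
};

// Product of N sublattices, one per field; join is pointwise and
// reports whether *any* component changed (the ChangeResult analogue).
struct ProductLattice {
  std::vector<SubLattice> fields;
  explicit ProductLattice(unsigned n) : fields(n) {}

  bool join(const ProductLattice &rhs) {
    bool changed = false;
    for (unsigned i = 0; i < fields.size(); ++i)
      changed |= fields[i].join(rhs.fields[i]);
    return changed;
  }
};
```

The appeal of this shape is that the core solver still sees one lattice element per Value; field sensitivity lives entirely inside the element, so no core data structure changes are required.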

(Still coming out of post vaccine OOO, but some comments below)

tl;dr: I’m okay with changes to the visitation logic (it’s not final anyway, IMO). The sub-element problem seems trickier given that we don’t have anything “native” for it, but I’m open to various options.

Changing a bit of the structure of the visitation LGTM. The current structure (and a bit of the traversal) is still too close to the original SCCP structure, and I just haven’t had the time yet to do a cleanup after the initial landing.

The problematic thing I see about pushing this into the core structures (which are shared by all users) is that there is currently no general interface for modeling “aggregate types” or manipulating them. I suspect it would be easiest to just handle this via your own lattice, though I don’t know if you’ve tried that yet. I would definitely be open to adding “native” support for these kinds of types, but I don’t yet see the generalization point (I mean, I see how it could generalize, but I’m slightly wary that when/if we do add interfaces for this kind of thing, it’s going to be harder to replace the stuff we hacked in. That concern may be dispelled based on what actually needs to change, or if you already have something in mind). Ideally, at some point we would provide a proper type interface + op interface (for insertion/extraction?) and bake those interfaces into the main traversal logic, so that SCCP and friends can benefit, given that they can’t/won’t hard-code support for these things.

Do you mean changing the core lattice structure, or just the lattice structure for Chris’ analysis? Having a side helper (i.e., not changing the core lattice element structure) that analyses can just plug in to their lattice seems like it could be a satisfying stop-gap until we have more “native” support for these types of things. It could also be an interesting way to test out what native support could look like.

– River

I meant a side helper that could represent “a lattice that consists of these N sublattices”. E.g. to model a product lattice from slide 17 here: