We have a general policy for passes which states: “It should not be possible to trigger a crash or assertion by running an MLIR pass on verifier-valid IR. If it is possible, then the pass has a bug.”
However, there may be some ambiguity about the test passes we have in-tree and whether they should be hardened the same way.
I’m of the opinion that they should: they expose entry points into the project and an interesting surface to test. If they may legitimately crash, then:
we can’t fuzz-test the compiler (at least not with these passes)
we can’t easily decide what is or isn’t a bug: when a downstream user produces a minimal crash repro that uses a test pass to demonstrate a problem, it isn’t clear whether it is a bug or not.
these passes also serve as “examples”, and not checking invariants and assumptions explicitly does not set a good example for users. Writing a pass “safely” requires following certain programming idioms, which are likely better applied uniformly across the project (see the sketch below).
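To make that idiom concrete, here is a minimal sketch of the “check, don’t assert” style; the pass name and the checked assumption are hypothetical:

```cpp
// Hypothetical test pass illustrating the "check, don't assert" idiom.
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/Pass.h"

namespace {
struct TestLoopTransformPass
    : public mlir::PassWrapper<TestLoopTransformPass,
                               mlir::OperationPass<mlir::func::FuncOp>> {
  MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID(TestLoopTransformPass)

  llvm::StringRef getArgument() const final { return "test-loop-transform"; }
  llvm::StringRef getDescription() const final {
    return "Test pass that validates its assumptions instead of asserting";
  }

  void runOnOperation() override {
    mlir::func::FuncOp func = getOperation();
    // Instead of `assert(!func.isExternal())`: verify the assumption and
    // emit an actionable diagnostic, so that a fuzzer or a downstream user
    // can tell "unsupported input" apart from a genuine bug.
    if (func.isExternal()) {
      func.emitError("expected a function with a body");
      return signalPassFailure();
    }
    // ... the transformation under test goes here ...
  }
};
} // namespace
```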
The counterpoint I can see (there may be more?) is the cost of maintenance and development: sanitizing inputs and assumptions in test passes requires effort, and we may spend time fixing issues beyond the intended scope of these passes. At the moment the benefits seem to me to drastically outweigh the cost, but I figured I’d poll everyone here.
As someone who is guilty of writing a pass without checks, I agree with the choice of ensuring assertions don’t happen, for the reasons you listed.
I think I commonly see existing passes being relied upon too much, so adding additional error messages might also cut back on this (for example, lowering to LLVM has constantly caused confusion, as people want to use the existing passes rather than collecting patterns into their own passes). I think the effort is worth it just to avoid explaining, multiple times over the years as different people make the same assumption, that they are not using the pass as intended.
I think it makes sense to ensure passes don’t hit assertions but instead error out gracefully. However, I would not go out of my way to provide good diagnostics for them: emitting “test pass applied to IR that it doesn’t expect” can be fine. In the same spirit, a lot of boilerplate and maintenance would go into testing such behavior of passes that are themselves tests. For example, if a pass convert-A-to-B used to assert on seeing an op from C, do we really want a test that ensures this is no longer the case?
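As a concrete sketch of that coarse-grained behavior (the pass and the dialects A, B, C are hypothetical), the body of such a pass could bail out with exactly that generic message instead of asserting:

```cpp
// Inside a hypothetical TestConvertAToBPass (same skeleton as the sketch
// above): rather than asserting on an op from the unsupported dialect "c",
// emit one generic diagnostic and fail gracefully.
void runOnOperation() override {
  mlir::WalkResult result = getOperation()->walk([&](mlir::Operation *op) {
    if (mlir::Dialect *dialect = op->getDialect();
        dialect && dialect->getNamespace() == "c") {
      op->emitError("test pass applied to IR that it doesn't expect");
      return mlir::WalkResult::interrupt();
    }
    return mlir::WalkResult::advance();
  });
  if (result.wasInterrupted())
    return signalPassFailure();
  // ... the A-to-B conversion itself would run here ...
}
```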
These are conditionally enabled to exercise code otherwise not used in other flows, and if something is not tested, it typically does not work or stops working at some point. If you compare this to LLVM, MLIR doesn’t have a ‘canonical’ pass pipeline like clang/opt that we could expect to be well covered. AFAIK, users will typically create their own pass pipelines, including custom passes made up of populate*Patterns, etc., from MLIR, instead of invoking opt directly with a set of passes to run.
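For illustration, this is roughly what such a downstream pipeline tends to look like; a sketch assuming an LLVM lowering assembled from in-tree populate* entry points (the pass itself is hypothetical, and the exact populate* signatures vary across MLIR versions):

```cpp
#include "mlir/Conversion/ArithToLLVM/ArithToLLVM.h"
#include "mlir/Conversion/FuncToLLVM/ConvertFuncToLLVM.h"
#include "mlir/Conversion/LLVMCommon/TypeConverter.h"
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/Transforms/DialectConversion.h"

// Hypothetical downstream pass that collects just the conversion patterns
// it needs instead of reusing the in-tree conversion passes wholesale.
void MyLoweringPass::runOnOperation() {
  mlir::MLIRContext *ctx = &getContext();
  mlir::LLVMTypeConverter typeConverter(ctx);

  // Gather patterns from the populate* entry points this pipeline needs.
  mlir::RewritePatternSet patterns(ctx);
  mlir::arith::populateArithToLLVMConversionPatterns(typeConverter, patterns);
  mlir::populateFuncToLLVMConversionPatterns(typeConverter, patterns);

  // Everything must lower to the LLVM dialect.
  mlir::ConversionTarget target(*ctx);
  target.addLegalDialect<mlir::LLVM::LLVMDialect>();

  if (mlir::failed(mlir::applyFullConversion(getOperation(), target,
                                             std::move(patterns))))
    signalPassFailure();
}
```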
Another thing that test passes enable is developer-centric options/hooks. For example, we can conditionally apply some transforms only to functions whose name starts with a given prefix.
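A minimal sketch of such a hook, with made-up pass and option names:

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/Pass.h"

namespace {
// Hypothetical test pass exposing a developer-centric hook: an option that
// restricts the transform to functions whose name starts with a prefix.
struct TestMyTransformPass
    : public mlir::PassWrapper<TestMyTransformPass,
                               mlir::OperationPass<mlir::func::FuncOp>> {
  MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID(TestMyTransformPass)

  llvm::StringRef getArgument() const final { return "test-my-transform"; }

  // Exposed on the command line, e.g.:
  //   mlir-opt -test-my-transform='func-prefix=__test_' input.mlir
  Option<std::string> funcPrefix{
      *this, "func-prefix",
      llvm::cl::desc("Only transform functions whose name starts with this "
                     "prefix"),
      llvm::cl::init("")};

  void runOnOperation() override {
    mlir::func::FuncOp func = getOperation();
    // Skip functions that don't match the developer-provided prefix.
    if (!funcPrefix.empty() && !func.getName().starts_with(funcPrefix))
      return;
    // ... apply the transform under test to `func` ...
  }
};
} // namespace
```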