There is a (relatively) recent set of patches aimed at introducing commutative matching in PDL. I have some concerns about the implementation, especially the potential for the generated bytecode to explode with the number of permutations, but I’m more worried about introducing “excessively powerful” bytecode instructions to implement higher-level constraints.
For example, I’m concerned that the for_each loop introduced by @sfuniak in the multi-root pattern implementation sets a precedent. I’m not against a for_each loop itself, but I didn’t like that it was used to “traverse” down the IR by iterating over all users. Downward traversals in pattern matching are very unnatural. I didn’t have a strong opinion at the time because I didn’t have an alternative solution handy for implementing multi-root patterns. But recently I’ve wondered whether all multi-root patterns can be implemented as a sequence of single-root patterns. Let’s say I want to rewrite:
%a = "test.a" : () -> i32
%b = "test.b"(%a) : (i32) -> i32
%c = "test.c"(%a) : (i32) -> i32
"test.sink"(%b, %c) : (i32, i32) -> ()
To:
%foo = "test.foo"() : () -> i32
"test.sink"(%foo, %foo) : (i32, i32) -> ()
I can use a multi-root pattern:
pdl.pattern {
  %type = type
  %a = operation "test.a" -> (%type: !pdl.type)
  %a_0 = result 0 of %a
  %b = operation "test.b"(%a_0: !pdl.value) -> (%type: !pdl.type)
  %c = operation "test.c"(%a_0: !pdl.value) -> (%type: !pdl.type)
  rewrite {
    %foo = operation "test.foo" -> (%type: !pdl.type)
    replace %b with %foo
    replace %c with %foo
    erase %a
  }
}
Or a sequence of single-root patterns (with assistance from DialectConversion):
pdl.pattern @one {
  %type = type
  %a = operation "test.a" -> (%type: !pdl.type)
  %a_0 = result 0 of %a
  %b = operation "test.b"(%a_0: !pdl.value) -> (%type: !pdl.type)
  rewrite {
    %placeholder = operation "test.placeholder" -> (%type: !pdl.type)
    replace %a with %placeholder
    replace %b with %placeholder
  }
}
pdl.pattern @two {
  %type = type
  %placeholder = operation "test.placeholder" -> (%type: !pdl.type)
  %placeholder_0 = result 0 of %placeholder
  %c = operation "test.c"(%placeholder_0: !pdl.value) -> (%type: !pdl.type)
  rewrite {
    %foo = operation "test.foo" -> (%type: !pdl.type)
    replace %c with %placeholder
    replace %placeholder with %foo
  }
}
Where I’ve marked test.placeholder as illegal. On a side note, what are the chances that the DialectConversion rollback mechanism can be exposed as a utility for writing more complex drivers? @River707 (Also, there are some bugs with using PDL together with DialectConversion.)
By the same token, the permutation in implementing commutative patterns can be achieved by generating all n! permuted patterns instead (with the same rewrite function). And I wonder whether multi-root patterns in general, and commutative patterns in particular, could be implemented as PDL-to-PDL rewrites.
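To make that concrete, here is a sketch of how a commutative match could be expanded into its 2! = 2 operand orderings, each an ordinary single-root pattern with an identical rewrite. This is in the same informal PDL style as above; test.add and test.zero are hypothetical ops standing in for a commutative op and a matched operand producer:

```mlir
// Ordering 1: the "test.zero" result is the left operand.
pdl.pattern @comm_0 {
  %type = type
  %other = operand
  %zero = operation "test.zero" -> (%type: !pdl.type)
  %zero_0 = result 0 of %zero
  %add = operation "test.add"(%zero_0, %other: !pdl.value, !pdl.value) -> (%type: !pdl.type)
  rewrite {
    replace %add with (%other: !pdl.value)
  }
}
// Ordering 2: the "test.zero" result is the right operand.
pdl.pattern @comm_1 {
  %type = type
  %other = operand
  %zero = operation "test.zero" -> (%type: !pdl.type)
  %zero_0 = result 0 of %zero
  %add = operation "test.add"(%other, %zero_0: !pdl.value, !pdl.value) -> (%type: !pdl.type)
  rewrite {
    replace %add with (%other: !pdl.value)
  }
}
```

The two patterns differ only in operand order, so PDLToPDLInterp should be able to merge the shared predicates (the root op name, operand count, the "test.zero" check) rather than evaluating them once per ordering.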
I think the roles of PDLInterp and the bytecode have been misunderstood a little. PDL was, from the start, an attempt to use compiler techniques to implement an efficient (and dynamic) pattern rewrite system; PDLToPDLInterp is a kind of specialized CSE for match constraints. Part of the misunderstanding, I’m inclined to believe, is my fault: there is a hidden abstraction layer between PDL and PDLInterp (the position/predicate nodes), and because it’s not represented as a dialect, it’s been overlooked.
PDL was designed with performance as the primary goal; consequently, the solution to adding new, powerful features to PDL is not to add new, powerful bytecode ops, but to think about what kinds of transformations we can apply to PDL/PDLInterp to make them happen and what new fundamental bytecode operations we might need. Do we need a for_each op? Do we need function calls (e.g. PDLInterp outlining)? Can we implement a pattern scheduling language on top of PDL (e.g. allowing rewrites to “call” patterns with rollback)?
For example, the approach to implementing branches in PDL patterns would be to flatten them into separate patterns and let PDLToPDLInterp CSE the shared constraints afterwards.
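As a sketch of what I mean: a pattern that conceptually branches on whether the user is "test.b" or "test.c" would be emitted as two flat patterns, and PDLToPDLInterp’s predicate tree would merge the checks they share on "test.a" so they are evaluated only once. (Same informal PDL style as above; "rewriteB" and "rewriteC" are hypothetical native rewriters.)

```mlir
// Branch arm 1: the user is "test.b".
pdl.pattern @branch_b {
  %type = type
  %a = operation "test.a" -> (%type: !pdl.type)
  %a_0 = result 0 of %a
  %root = operation "test.b"(%a_0: !pdl.value) -> (%type: !pdl.type)
  rewrite %root with "rewriteB"
}
// Branch arm 2: the user is "test.c". The "test.a" constraints are
// textually duplicated, but CSE'd during PDLToPDLInterp lowering.
pdl.pattern @branch_c {
  %type = type
  %a = operation "test.a" -> (%type: !pdl.type)
  %a_0 = result 0 of %a
  %root = operation "test.c"(%a_0: !pdl.value) -> (%type: !pdl.type)
  rewrite %root with "rewriteC"
}
```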
PDLL has a growing wishlist of features that don’t exist natively in PDL, so I think this is something important to think about.