In conclusion, we could teach PassManagers how to retrieve constraints on
passes and which passes to run, taking into account both:
- the information stored in Pass Constraints and
- the optimization level associated with individual functions (if available).
I like this approach. Today, the way to know which passes are added is to
look at the functions and follow the branches for O1, O2, etc. Your
proposal is way cleaner and allows for a table-based approach. It also
makes it simpler to experiment with passes at different optimization levels
on randomized benchmarks.
I have often tried commenting out passes to identify bugs (ones that
bugpoint wouldn't find) and realized that doing so could generate many
segmentation faults in the compiler, which is worrying...
3.1 How pass constraints can be used to select passes to run
It is the responsibility of the pass manager to check the effective
optimization level for all passes with a registered set of constraints.
There is a catch here. Passes generally have unwritten dependencies which
you cannot tell just by looking at the code. Things like "run DCE after
PassFoo only if the state of variable Bar is Baz" can sometimes only be
found out by going back through the commits that introduced them and
discovering that they were, indeed, introduced together and it's not just
an artefact of how the code evolved.
The table I refer to above would have to contain the dependencies
(backwards and forwards), with a possible condition (a virtual method) to
decide whether a pass has to run or not based on some context, in addition
to which optimization levels each pass should run at. In theory, having
that, it would be just a matter of listing all passes for O-N which nobody
depends on and following all the dependencies to get the list of passes
for the PassManager. Removing a pass from the O3 level would also have to
remove all the orphaned passes that the removal creates. Just like Linux
package management.
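To make the idea concrete, here is a minimal sketch of that table and the
orphan-removal rule. Everything in it is hypothetical: the pass names,
levels, and dependencies are made up for illustration and do not reflect
LLVM's actual pipeline or any real API.

```python
# Hypothetical pass table: name -> (minimum -O level, passes it depends on).
# All entries are illustrative, not LLVM's real pipeline.
PASS_TABLE = {
    "simplifycfg": (1, []),
    "instcombine": (1, []),
    "dce":         (1, ["instcombine"]),
    "inline":      (2, ["simplifycfg"]),
    "vectorize":   (3, ["inline", "dce"]),
}

def select_passes(opt_level, disabled=()):
    """List the passes for -O<opt_level>, then repeatedly drop any pass
    whose dependencies are unavailable -- the package-manager-style
    orphan removal described above."""
    enabled = {name for name, (level, _) in PASS_TABLE.items()
               if level <= opt_level and name not in disabled}
    changed = True
    while changed:
        changed = False
        for name in sorted(enabled):
            deps = PASS_TABLE[name][1]
            if any(dep not in enabled for dep in deps):
                enabled.discard(name)  # orphaned: a dependency is missing
                changed = True
    return sorted(enabled)
```

For example, disabling "dce" at O3 would also drop "vectorize", since it
depends on "dce" in this made-up table.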
Pass Constraints should allow the definition of constraints on both
the optimization level and the size level.
Yes, AND to run, OR to not run.
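The "AND to run, OR to not run" rule could be encoded as below. This is
only a sketch of the logic; the function name, parameters, and ranges are
assumptions, not part of any existing Pass Constraints interface.

```python
def should_run(opt_level, size_level, min_opt=0, max_size=2):
    """A pass runs only when BOTH its optimization-level and size-level
    constraints hold (AND to run); it is skipped as soon as EITHER one
    is violated (OR to not run). Bounds are illustrative."""
    opt_ok = opt_level >= min_opt    # optimization-level constraint
    size_ok = size_level <= max_size # size-level constraint
    # "and" here is equivalent to: skip if (not opt_ok) or (not size_ok)
    return opt_ok and size_ok
```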
In order to support per-function optimizations, we should modify the
existing SimpleInliner to allow adapting the Threshold dynamically based
on changes in the effective optimization level.
This is a can of worms. A few years back, when writing our front-end, we
found that, since there were no tests on inline thresholds with any value
other than the hard-coded one, anything outside a small range around the
hard-coded values would create codegen problems, segfaults, etc. It could
be much better now, but I doubt it's well tested yet.
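For what it's worth, the dynamic adaptation being proposed amounts to
something like the sketch below: pick the threshold from the effective
per-function level when one is set, otherwise fall back to the module-wide
level. The function, its parameters, and the threshold values are all
illustrative assumptions, not SimpleInliner's actual defaults or API.

```python
# Illustrative thresholds per -O level; NOT LLVM's real defaults.
THRESHOLDS = {0: 0, 1: 100, 2: 225, 3: 275}

def effective_threshold(module_opt_level, function_opt_level=None):
    """Use the function's own optimization level when one was set
    (e.g. via a per-function attribute), otherwise fall back to the
    module-wide level, and map it to an inlining threshold."""
    level = (function_opt_level
             if function_opt_level is not None
             else module_opt_level)
    return THRESHOLDS[level]
```

The worry above applies directly: every value this table can produce,
other than the default, is a threshold the inliner has rarely been tested
with.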
As a future development, we might allow setting the inlining threshold
using the optimize pragma.
This, again, is something it would be good to write randomized tests for.
But before we have some coverage, I wouldn't venture to do that in real
code.
Unfortunately, changing how code generator passes are added to pass
managers requires that we potentially make changes to target-specific
parts.
It shouldn't be too hard, but you'll have to look closely at whether there
is any back-end that depends on optimization levels to define other
target-specific properties (cascading dependencies).
4. Proposed Implementation Workflow
I think your workflow makes sense, and I agree that this is a nice feature
(for many uses). Thanks for looking into this!