RFC: Pass Manager Redux

Greetings folks!

In working on a new optimization pass (splitting cold regions into separate functions based on branch probabilities) I’ve run into some limitations of the current pass manager infrastructure. After chatting about this with Nick, it seems that there are some pretty systematic weaknesses of the current design and implementation (but not with the fundamental concepts or behavior).

Current issues:

  • Arbitrary limitations on how passes can depend on an analysis: module passes have a gross hack to depend on function pass analyses, and CGSCC passes are just out of luck.

  • Poor caching of analysis runs across pass manager boundaries. Consider the N iterations over an SCC and the function passes run by the CGSCC pass manager. Even if every CG and function pass in the manager preserves a function pass analysis X, that analysis will be re-run on each iteration of the SCC because it is scheduled in the function pass manager, not the CG pass manager. If we solve the previous item, this one suddenly becomes a serious problem.

  • The structure of the pass management tree is very non-obvious from code. The pass manager builder simply adds a linear sequence of passes that happens to build the appropriate stacks of passes at the appropriate times.

  • Currently the interfaces and implementations of passes and pass managers are overly complex: multiple inheritance, lots of virtual dispatch, etc. They don’t use the canonical LLVM ‘isa’ and ‘dyn_cast’ techniques.

I’d like to fix these issues and build a new pass manager implementation designed around the following core concepts:

  • We should have clear tracking and statistics for pass run count, analysis invalidation, etc.

  • Analysis scheduling and re-use is fundamentally a caching and dependency problem, and we should structure it as such.

  • Non-preserving passes should invalidate the cache.

  • The cache should be capable of spanning any particular pass management boundary when needed.

  • We should be able to trade memory for speed and cache more analyses when beneficial.

  • The infrastructure should at least support a lazier approach to analyses, so that we can do more to avoid computing them at all.

  • PassManagerBuilder should use an explicit nested syntax for building up the structure of the passes so it is clear when a pass is part of a CGSCC pass manager, or when it is a normal function pass.

  • Clear hierarchy of “Pass” interfaces. Every pass should be capable of acting-as-if it is a higher level pass. That is, a function pass should be capable of acting-as-if it is a module pass just by running over all functions in the module. That doesn’t mean this should regularly be used, but it makes conceptual reasoning about passes and testing of passes much more clear.

  • PassManagers should be passes, and serve as pass-aggregation strategies and analysis caching strategies. Where a function pass can act as a module pass, you usually instead want a function pass manager, which will collect a sequence of function passes and run all of them over each function all at once. This aggregation strategy increases locality. Similarly, a CGSCC pass manager aggregates the pass runs over an SCC of the call graph. Each pass manager could influence the caching strategy as well, for example the CGSCC pass manager might cache a function analysis pass over an entire SCC, rather than just over one function.

  • Single, LLVM-style inheritance model for the whole thing.

  • Users should be able to add new pass managers and compose them cleanly with the LLVM-provided pass managers. Currently, many implementation details are buried in ways that make this hard.

  • What else did I miss?

Things that are totally out of scope: pass registration, the current pass order and structure, the fundamental interface of mapping from a pass to an analysis, etc. This is really about pass management and scheduling.

If folks generally like where this is going, I’ll get to work writing up initial code. The first thing I would do is provide an example collection of interfaces for the passes and pass managers to make sure we’re all on the same page. By then I should have a decent idea about the best strategy for cutting this into the actual codebase.

Hi Chandler, this seems sound to me. For example, consider running function
passes. Currently it works like this: if you schedule two function passes in
succession, FP1 and FP2, then for each function F, FP1 is run on F then FP2 is
run on F.

In your new scheme, if you schedule FP1 followed by FP2, then each will act as
a module pass and thus: for each function F, FP1 is run on F. Once this is
done, then for each function F, FP2 is run on F.

To get the previous scheduling, you would create a function pass manager FPM,
which would itself be a function pass, and add FP1 and FP2 to FPM. When FPM
is run on a function, what it would do is: run FP1 on the function, then run
FP2 on the function. Scheduling FPM, which would act as a module pass, would
thus do: for each function F, run FPM on it. I.e. this would do: for each
function F, run FP1 on F then run FP2 on F. In short, you can easily implement
the current behaviour in a simple, natural and explicit way.

Ciao, Duncan.

Duncan Sands <baldrick@free.fr> writes:

To get the previous scheduling, you would create a function pass manager FPM,
which would itself be a function pass, and add FP1 and FP2 to FPM. When FPM
is run on a function, what it would do is: run FP1 on the function, then run
FP2 on the function. Scheduling FPM, which would act as a module pass, would
thus do: for each function F, run FPM on it. I.e. this would do: for each
function F, run FP1 on F then run FP2 on F. In short, you can easily implement
the current behaviour in a simple, natural and explicit way.

I like this very much. The current system is overly complex and often
results in puzzling runtime errors if the module/function pass
separation is not respected.

One thing I have wanted very much is to run some function-level passes,
then some module-level passes and then some more function-level passes.
It sounds like this new scheme will make that easier.

It's a pity this won't cover pass registration because that is also
overly complex, IMHO, both for the initial coding (specifying passes in
multiple places) and for debugging (figuring out why a pass didn't run).
It's all a bit too C++-magical for me.

But one thing at a time. :-)

                              -Dave

Analysis groups. There are two limitations on analysis groups:

1) For some reason, setting one up and adding passes to it doesn’t seem to have worked quite as smoothly as it should. I don’t remember the exact details, but in the past, I’ve had to struggle to create Analysis Groups and make them work properly.

2) Rerunning analysis group passes after invalidation. If you want to use an analysis other than the default analysis group, you have to manually specify where to use it each time in the pass pipeline. If your non-default pass gets invalidated, then the default is used by any other transform that requests an analysis from that group.

– John T.

John Criswell <criswell@illinois.edu> writes:

2) Rerunning analysis group passes after invalidation. If you want to
use an analysis other than the default analysis group, you have to
manually specify where to use it each time in the pass pipeline. If
your non-default pass gets invalidated, then the default is used by
any other transform that requests an analysis from that group.

Oh yes, I've run into this as well. I did a local hack here to add a
parameter to specify which pass should be treated as default.

                              -Dave

This is excellent Chandler.

We spoke before about wanting to keep running things like InstCombine and SimplifyCFG in a loop until stabilization. This will make doing that much easier.

I think some of the ideas for this work might have come from a patch I tried to submit a few weeks ago on reducing recomputation of the dom tree, in which you commented that we need to fix the pass manager. In that email I mentioned that not all passes respect the return value from runOnFunction. Is your pass framework going to explicitly use this return value? If so, we'll need everyone to help make sure passes correctly return when they change anything.

Pete

This is excellent Chandler.

We spoke before about wanting to keep running things like InstCombine and
SimplifyCFG in a loop until stabilization. This will make doing that much
easier.

I think some of the ideas for this work might have come from a patch I
tried to submit a few weeks ago on reducing recomputation of the dom tree, in
which you commented that we need to fix the pass manager.

Well, my interest was renewed sharply when I wrote a significant
optimization pass that simply cannot run in the existing framework. =/

In that email I mentioned that not all passes respect the return value
from runOnFunction. Is your pass framework going to explicitly use this
return value? If so we'll need everyone to help make sure passes correctly
return when they change anything.

I *really* want to make this happen, but I think it needs to happen as a
phase-2. For now, I think we have to replicate the annoying current
behavior.

Hi Chandler,

I understand why the pass registration and the pass order/structure are out of
scope for your work. However, as others have already noted, there are problems
that are somewhat related to how pass scheduling works. From my perspective,
the current way we specify the pass order/structure is spread across too many
different places. Maybe this is the right time to discuss how this could be
improved? That way your proposal may already incorporate some of the
requirements that might come up once the pass registration/order/structure is
changed. One idea, for instance, could be to extend tablegen to specify pass
pipelines, or maybe even better, allow the parsing of such specifications at
runtime. This would provide a clear representation of which passes are
executed, and a clear interface on how this can be modified/extended.

Another important point is passes in the backend. Currently we have some
severe limitations there (for instance, all codegen passes have to be function
passes). It would be great if the full flexibility of the pass scheduling
framework could be preserved even in the backend.

All the best,
Florian

  • What else did I miss?

Things that are totally out of scope: pass registration, the current pass order and structure, the fundamental interface of mapping from a pass to an analysis, etc. This is really about pass management and scheduling.

Hi Chandler,

I understand why the pass registration and the pass order/structure are out of
scope for your work. However, as others have already noted, there are problems
that are somewhat related to how pass scheduling works. From my perspective,
the current way we specify the pass order/structure is spread across too many
different places.

Really? It seems fairly reasonable to me: the PassManagerBuilder has the canonical bits for a reasonable optimization pipeline combined with extension hooks that clients can use to inject custom passes into reasonable places in the pipeline.

I don’t see a huge problem here, and so I’d like to keep the scope narrow. If you do see problems, I would build on top of the hopefully cleaner version. =]

One idea, for instance, could be to extend tablegen to specify pass
pipelines,

I don’t think this is useful. We can define reasonable and clear natural C++ interfaces for this.

Another important point is passes in the backend. Currently we have some
severe limitations there (for instance, all codegen passes have to be function
passes). It would be great if the full flexibility of the pass scheduling
framework could be preserved even in the backend.

I agree that backend passes are important, but don’t want to go there for this effort. It’s just not in scope at all.

This sounds really great to me, and it is long overdue.

Here are a few additional feature requests to consider since you're revamping this area:

1. Performance: the current pass manager is far slower than it should be. Try running our current pass pipeline over a module with a ton of empty functions and see how bad it is.

2. It would be great to get conditionally invalidated analysis passes. For example, if you run something like "dominators, loop unswitch, dominators", and loop unswitch doesn't *actually* change anything, then the second run of dominators shouldn't do anything. In fact, we shouldn't have two instantiations of the dominator pass in the first place.

3. There are some problems around pass invalidation that make it difficult to implement stateful (i.e., not ImmutablePass) alias analysis algorithms. Since the patent on Steensgaard's analysis is about to expire, it would be nice to be able to implement it and have the whole optimizer get better. I forget the details though. Dan, do you remember the issues?

-Chris

One thing that bugged me a lot was that -print-{before,after}-all doesn't actually print before/after all passes if you just add passes to a pass manager and run the pass manager yourself...

A slightly related implementation detail is that I find it useful to be able to register and configure passes without instantiating them by using the static ID only. I was never sure whether we were moving toward char& or void* IDs.

-Andy