RFC: [GlobalISel] Towards a generic MI combiner framework

Hi everyone,

This RFC concerns the design and architecture of a generic machine instruction combiner/optimizer framework to be developed as part of the GISel pipeline. As we transition from correctness and reducing the fallback rate to SelectionDAG at -O0, we’re now starting to think about using GlobalISel with optimizations enabled. There are obviously many parts to this story, as optimizations happen at various stages of the codegen pipeline. The focus of this RFC is the GISel equivalent of SDAG’s DAGCombiner. Despite the focus on the DAGCombiner, since there aren’t perfect 1-1 mappings between SDAG and GlobalISel components, this may also include features that are currently implemented as part of the target lowerings and tablegen isel patterns. As we’re starting from a blank slate, we have an opportunity here to think about what we might need from such a framework without the legacy cruft (although we still have a high performance bar to meet).

I want to poll the community about the future requirements for the GISel G_MI optimizer/combiner. The following are the general requirements we have so far:

  1. It should have an at least equivalent, and hopefully better, runtime/compile-time trade-off than the DAGCombiner.
  2. There needs to be flexibility in the design to allow targets to run subsets of the overall optimizer. For example, some targets may want to avoid running certain types of optimizations, like vector or FP combines, if they’re either not applicable or not worth the compile time.
  3. There should be a reasonably concise way to write most optimizations. Hand-written C++ will always be an option, but there’s value in having descriptions of transforms that are easy to read and reason about.

These requirements aren’t set in stone, nor complete, but using them as a starting point: a single monolithic “Generic MI combiner” component doesn’t look like the right approach. Our current thinking is that, as we’ve done with the Legalizer, the specific mechanics of the actual optimization should be separated into its own unit. This would allow the combines to be re-used at different stages of the pipeline according to target needs. Using the current situation with instcombine as an example, there is no way to explicitly pick and choose a specific subset of IC; it’s only available as a whole pass, with all the costs that entails.

The reasoning behind req 3 is that there may be compile-time savings available if we can describe the combines we want to do in a declarative style, as is currently possible with tablegen patterns. This hasn’t been proven out yet, but consider an alternative where we use the machine instruction equivalent of the IR/PatternMatch tooling, which allows easy and expressive matching of IR sub-trees. A concern I have with using that as the main approach to writing combines is that it’s easy to add new matchers in a routine that re-compute information already computed by previous match() attempts. This form of back-tracking might be avoided if we can reason about a group of combines together automatically (or perhaps we could add caching capabilities to PatternMatch).
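
To make the concern concrete, here is a minimal sketch using today’s IR-level PatternMatch (an MI-level equivalent would presumably look similar); the replacement-building bodies are elided, and the point is only that each rule restarts its match() from the root, re-examining operands a previous rule already looked at:

#include "llvm/IR/Instruction.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// Rule 1: (add (mul X, Y), X) -> could fold to (mul X, (Y + 1)).
static bool combineAddOfMul(Instruction &I) {
  Value *X, *Y;
  if (match(&I, m_Add(m_Mul(m_Value(X), m_Value(Y)), m_Deferred(X)))) {
    // ... build the replacement ...
    return true;
  }
  return false;
}

// Rule 2: (add (shl X, C), X) -> could fold to a multiply.
// Note that it independently re-matches the same add root from scratch.
static bool combineAddOfShl(Instruction &I) {
  Value *X;
  ConstantInt *C;
  if (match(&I, m_Add(m_Shl(m_Value(X), m_ConstantInt(C)), m_Deferred(X)))) {
    // ... build the replacement ...
    return true;
  }
  return false;
}

A driver that tries these rules in sequence inspects the add’s operands once per rule; a generated matcher that reasons about the whole rule set together could share that common prefix work.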

What would everyone else like to see from this?

Thanks,
Amara

I share this concern. The current DAGCombine, being constructed on top of SDAG, has a kind of built-in CSE and automatic DCE. How will things change, if they’ll change, in this new model? Thanks again, Hal

Hi Amara,

I agree that the combine mechanics should be usable at different points in the pipeline. If we’ve just replaced some MIR with different MIR then we should be able to ask the combiner to operate on the new MIR. Much like the legalizer, this would reduce the frequency with which we have to implement the same thing in multiple passes just because we can’t run the existing code at the time.

My thinking on the declarative side of this is that (with a few exceptions that I’ll get to), combine and select are basically the same thing. You match some MIR, and replace it with other MIR. The main difference is that combine doesn’t have to constrain to register classes (unless it wants to) while select does.

With that in mind, I was thinking that it makes sense to put a lot of effort into the optimization of the tablegen-erated selection table (as has been started in Quentin’s recent patch) and then re-use it for combines too. We’ll need to be careful how we define GlobalISel’s counterpart to SelectionDAG patterns to make it expressive enough to support combines, but that’s essentially a second frontend (the other being the SelectionDAG importer) on a common backend.

Req 2 becomes simple to implement in this approach. You can either use the existing feature-bits mechanism to enable/disable combine rules as a group, or add an equivalent mechanism in tablegen to decide whether a rule makes it into the emitted table or not and have multiple tables which you can run/not-run at will. With the new coverage feedback mechanism, we could potentially organize our tables semi-automatically by highlighting combine rules that never or rarely fire in a particular pass.
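
To sketch what the “multiple tables” variant could look like, here is a rough shape in C++; every name in it (RuleTable, Combiner, executeMatchTable) is invented for illustration and isn’t an existing GlobalISel API:

#include <cstdint>
#include <vector>

namespace sketch {

struct MachineInstr; // stand-in for llvm::MachineInstr

using MatchTable = std::vector<uint8_t>; // tablegen-emitted matcher bytecode

// One tablegen-emitted group of combine rules, e.g. "fp-combines".
struct RuleTable {
  const char *Name;
  MatchTable Table;
  bool Enabled; // flipped per-target and/or per-pipeline-stage
};

struct Combiner {
  std::vector<RuleTable> Groups;

  // Interpreter over the emitted table; details elided.
  bool executeMatchTable(const MatchTable &, MachineInstr &) { return false; }

  bool tryCombine(MachineInstr &MI) {
    for (RuleTable &G : Groups)
      if (G.Enabled && executeMatchTable(G.Table, MI))
        return true;
    return false;
  }
};

} // namespace sketch

The coverage feedback would then just be a matter of counting hits per RuleTable and demoting groups that never fire in a given pass.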

One feature that isn’t on the requirements list already, but that I think we ought to have, is a means to support rules with more than one match root. For example (using SelectionDAG patterns), combining:
(set $dst1:GPR32, (i32 (load $ptr:GPR64)))
(set $dst2:GPR32, (i32 (load (add $ptr:GPR64 4))))
into:
(set $tmp:GPR64, (v2s32 (load $ptr:GPR64)))
(set $dst1, (extractelt $tmp:GPR64, 0))
(set $dst2, (extractelt $tmp:GPR64, 1))
Or something along those lines (such as fusing div/mod together). The combiner should be smart enough to make the root the $ptr, and follow the use of $ptr into the load/add, then follow the def to the 4.
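
To illustrate the kind of def/use walking this implies, here is a rough sketch in terms of today’s MachineRegisterInfo API; the surrounding combine plumbing, and the checks that the offset really is 4 and that the loads are adjacent and compatible, are elided:

#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"
using namespace llvm;

// Starting from the vreg holding $ptr, find the direct load and the
// load through (G_ADD $ptr, 4), if both exist.
static bool findPairedLoads(MachineRegisterInfo &MRI, unsigned Ptr,
                            MachineInstr *&DirectLoad,
                            MachineInstr *&OffsetLoad) {
  DirectLoad = OffsetLoad = nullptr;
  for (MachineInstr &UseMI : MRI.use_instructions(Ptr)) {
    if (UseMI.getOpcode() == TargetOpcode::G_LOAD) {
      DirectLoad = &UseMI;
    } else if (UseMI.getOpcode() == TargetOpcode::G_ADD) {
      // Check that the add's other operand is the constant 4 (elided),
      // then follow the add's def into a second load.
      unsigned AddDst = UseMI.getOperand(0).getReg();
      for (MachineInstr &AddUse : MRI.use_instructions(AddDst))
        if (AddUse.getOpcode() == TargetOpcode::G_LOAD)
          OffsetLoad = &AddUse;
    }
  }
  return DirectLoad && OffsetLoad;
}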

Hi Hal,

I suspect one option is to have a separate CSE pass, and the backends get to choose where exactly in their pipeline to plug it in. I think DCE should be part of the combine pass (and the legalizer is about to start doing that as well).

For SSA-form MIR there’s already the MachineCSE pass. How important CSE/DCE is at the combine stage I don’t know; as an approximation, perhaps we can get an idea by disabling the behavior in the DAGCombiner and seeing the effects.
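
For what it’s worth, the DCE part is cheap to do inline in SSA MIR. A minimal sketch of the kind of helper I’d expect (this exact function is illustrative, not an existing API):

#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// After a combine has replaced all uses of MI's defs, erase MI if it has
// no remaining non-debug users and no visible side effects.
static void eraseIfTriviallyDead(MachineInstr &MI, MachineRegisterInfo &MRI) {
  if (MI.mayStore() || MI.isCall() || MI.hasUnmodeledSideEffects())
    return; // conservatively keep anything with side effects
  for (const MachineOperand &MO : MI.defs())
    if (!MRI.use_nodbg_empty(MO.getReg()))
      return; // a def still has users
  MI.eraseFromParent();
}

Applied after each successful combine, something like this would give much of the “automatic DCE” behavior the DAGCombiner currently gets for free.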

Agreed that combine and selection are similar processes. Re-using the optimized selection table for combines sounds like something we should look at prototyping.

Multi-root matching seems like a nice feature, but I wonder about the impact it will have on the speed of the matching algorithm; I don’t know enough about it to say, though. IMO complex features can be done in C++ code if they’re uncommon, in favor of fast handling of the common cases. Maybe a few more use cases are needed.

Thanks,
Amara

My impression is that the automated CSE and DCE is very important to the current implementation. There’s a lot of code that depends on this happening in order to have the expected effects. Otherwise, the use-count checks won’t do the right thing (because the old, now-unused nodes won’t immediately go away). I’m not entirely sure you can just turn off the uniquing in SDAG and get a sensible result. -Hal

IIRC Aditya turned off CSE in SDISel for experimentation purposes (we wanted to quantify how useful that is), and the results were surprising in the sense that it pretty much didn’t affect the quality of the generated code.

Aditya, could you share what you found when you did this experiment?
Admittedly, this was a while ago and I may misremember!

IIRC (from a while back), I made the doNotCSE method always return true for a small set of tests and noticed, based on crude metrics, that there wasn’t a major performance degradation (~2-3% increase in instructions). From what I hear now, I suspect that alone is not sufficient to completely turn off CSE.

Hi Amara,

It would be great to provide first-class support for maintaining debug value information as a part of the new combine framework.

With SelectionDAG, we don’t have a systematic way of preserving debug locations and values across combines. This is a source of bugs when debugging optimized code. If, as a part of the new framework, we could concisely express that a RAUW-style combine simply transfers debug values from A to B, we might define away some of these bugs [1].
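
As a sketch of what “a RAUW-style combine transfers debug values” could mean at the MIR level (the helper below is hypothetical, not an existing API):

#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// Replace every use of OldReg with NewReg, and carry over the source
// location so the new definition doesn't silently drop it.
static void replaceAndTransferDebugInfo(MachineRegisterInfo &MRI,
                                        unsigned OldReg, unsigned NewReg) {
  MachineInstr *OldDef = MRI.getVRegDef(OldReg);
  MachineInstr *NewDef = MRI.getVRegDef(NewReg);
  if (OldDef && NewDef && !NewDef->getDebugLoc())
    NewDef->setDebugLoc(OldDef->getDebugLoc());
  // DBG_VALUE instructions reference the vreg directly, so rewriting the
  // register also re-points the debug users at the new value.
  MRI.replaceRegWith(OldReg, NewReg);
}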

Adrian put in place some infrastructure to do this in SelectionDAG (r317825). However, auditing/fixing debug value transfer issues in hand-written combines is time-consuming. I think it should be a goal of the new framework to make this a bit easier.

best,
vedant

[1] To pick one at random, the ‘(zext (zextload x)) → (zext (truncate (zextload x)))’ combine should transfer debug values from N to the new ExtLoad, and from N0 to the new (trunc ExtLoad), but currently doesn’t.

+1

Do you and/or Adrian also have thoughts on testing debug values? Verification and testing strategies should/could also be part of the design.

I have two ideas on how to approach testing.

  1. Recycle existing targeted lit tests to test debug info preservation.

Consider a targeted lit test which looks like this:
RUN: opt -S -loop-reduce addrec-gep.ll -o - | FileCheck %s

We can repurpose this test by attaching synthetic debug information to the IR, and then checking how much of it survives LSR. I prototyped this idea for IR-level tests over the break. Adding a debug info test looks like this:
RUN: opt -S -debugify -loop-reduce -check-debugify addrec-gep.ll -o - | FileCheck …

The check-debugify pass can determine which DILocations and DIVariables went missing:

CheckDebugify: Instruction with empty DebugLoc – %lsr.iv1 = bitcast double* %lsr.iv to i1*
CheckDebugify: Missing line 3
CheckDebugify: Missing line 4

CheckDebugify: Missing line 33

This could be a handy way to create new targeted test cases at the IR/MIR level. Something like this would have helped triage issues like llvm.org/PR25630 as well.

  2. Assert that combines preserve debug info.

If there’s an important API like “CombineTo(From, To)”, it could be useful to assert that the To node has at least as much debug info as the From node. I’m experimenting with this in SelectionDAG (llvm.org/PR35338). I don’t yet know where these asserts belong, how strict they should be, or if they need exception lists for certain combines.
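
For instance, a minimal version of such an assert, assuming a MIR-level CombineTo-style API (the helper and its placement are hypothetical):

#include "llvm/CodeGen/MachineInstr.h"
#include <cassert>
using namespace llvm;

// Called from a hypothetical CombineTo(From, To): the replacement should
// carry at least as much location information as the original.
static void checkDebugLocPreserved(const MachineInstr &From,
                                   const MachineInstr &To) {
  assert((!From.getDebugLoc() || To.getDebugLoc()) &&
         "combine dropped a debug location");
  (void)From;
  (void)To; // silence unused warnings in release builds
}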

Thanks for the suggestions Vedant. Synthetic debug info is an interesting idea that sounds worthwhile. Could this be implemented as a “wrapper” that automatically adds the synthetic debug info before a specific pass (or pipeline of passes) runs in opt and checks it afterwards? It might be useful to be able to easily enable this for a wide range of tests without having to manually modify each run line, perhaps via an environment variable or build-time flag.

Cheers,
Amara

I like this idea too, but I’d like to see it work in the backend passes as well (like -verify-machineinstrs). It doesn’t necessarily tell you if the information ends up in the right place, but I think that detecting the loss is likely to be just as good in a fair portion of the backend (e.g. ISel when emitting one instruction), and when it isn’t, it’s still a good start. The one thing it wouldn’t detect is when information is preserved but put in the wrong place.

+1. I haven’t prototyped this for the backend passes because I don’t know exactly what it would look like, but I’ve made a note about it.

Right, showing that debug info is preserved is good, but it doesn’t show that the info is preserved correctly. I’m not sure how to create tests for that in an automated way. One idea is to add asserts where possible. For example, it might be correct/useful to assert that in CombineTo(A, B), the debug locations for A and B are the same.

Enabling this sort of testing for a wide range of tests sounds useful. I’ll take a stab at adding an option and environment variable to opt to enable this as a follow-up to D40512.

thanks,
vedant