[AVR] [MSP430] Code gen improvements for 8 bit and 16 bit targets

Hi All,

While implementing a custom 16-bit target for academic and demonstration purposes, I unexpectedly found that LLVM was not really ready for 8-bit and 16-bit targets. Let me explain why.

Target backends can be divided into two major categories, with essentially nothing in between:

Type 1: The big 32- or 64-bit targets. Heavily pipelined, with expensive branches, running at clock frequencies up to the GHz range. Aimed at workstations, desktop computers or smartphones. For example PowerPC, x86 and ARM.

Type 2: The 8- or 16-bit targets. Non-pipelined processors, running at frequencies in the MHz range, with generally fast access to memory, aimed at the embedded market or low-power applications (they are virtually everywhere). LLVM currently implements an experimental AVR target and the MSP430.

LLVM does a great job for Type 1 targets, but it can be improved for Type 2 targets.

The essential target feature that determines which style of code generation is better for type 1 or type 2 targets is pipelining. For type 1 we want branching to be avoided as much as possible. Turning branching code into sequential instructions, with speculative execution of code, is advantageous. These targets have instruction sets that help with that goal, in particular cheap 'shift' and 'cmov'-type instructions.

Type 2 targets, on the contrary, have cheap branching. Their instruction sets are not particularly designed to help avoid branches, because that's not required. In fact, branching on these targets is often desirable, as opposed to transforms that create expensive speculative execution. Shifts move only a single bit at a time, and conditional-execution instructions other than branches are not available.
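To make the contrast concrete, here is an illustrative pair of equivalent functions (my own example, not taken from the bug reports; it assumes the usual arithmetic right shift of negative values, as implemented by GCC and Clang):

```c
#include <assert.h>
#include <stdint.h>

/* Branchless form: attractive on Type 1 targets, where the shift and xor
   are cheap and a branch would stall the pipeline. */
static int16_t abs_branchless(int16_t x) {
    int16_t m = (int16_t)(x >> 15);  /* 0 for x >= 0, -1 for x < 0 */
    return (int16_t)((x ^ m) - m);
}

/* Branchy form: attractive on Type 2 targets, where a conditional jump is
   one cheap instruction but a 15-bit shift costs roughly 15 instructions. */
static int16_t abs_branchy(int16_t x) {
    return x < 0 ? (int16_t)-x : x;
}
```

On a pipelined type 1 machine the first form wins; on MSP430 or AVR, where each shift step is a separate instruction, the second form is far cheaper.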

The current situation is that some LLVM target-independent optimisations are not really that 'independent' when we bring type 2 targets into the mix. Unfortunately, LLVM was apparently designed with type 1 targets alone in mind, which causes degraded code for type 2 targets. In relation to this, I posted a couple of bug reports that show some of these issues:

https://bugs.llvm.org/show_bug.cgi?id=43542
https://bugs.llvm.org/show_bug.cgi?id=43559

The first bug has already been fixed by somebody who also suggested that I raise this subject on the llvm-dev mailing list, which I'm doing now.

Incidentally, most code degradations happen in the DAGCombine code. It's a bug because LLVM may create transforms into instructions that are not Legal for some targets, and such transforms are detrimental on those targets. This bug won't show for most targets, but it particularly affects targets with no native multi-bit shift support. The bug consists of the transformation of already relatively cheap code into expensive code; the fix prevents that.

Still, even once the DAGCombine code above is fixed, the poor code generation issue will REMAIN. In fact, the same kind of transformations are performed earlier, as part of the IR optimisations, in the InstCombine pass. The result is that the IR /already/ incorporates the transformations that are undesirable for type 2 targets, and DAGCombine can't do anything about that.

At this point, reverse pattern matching looks like the obvious solution, but I think it's not the right one, because it would need to be implemented in every single current or future (type 2) target. It is also difficult to get rid of undesired transforms when they carry complexity or are the result of consecutive combinations. Delegating the whole solution to reverse pattern matching code alone will just perpetuate the overall problem, which will continue to affect future target developments. Some reverse pattern matching is acceptable, and even desirable, to deal with very specific target features, but not as a global solution to this problem.

In a previous email, it was stated that in recent years attempts have been made to remove code from InstCombine and port it to DAGCombiner. I agree that this is a good thing to do, but it was reportedly difficult and associated with potential problems or unanticipated regressions. I understand those concerns and I acknowledge that the work involved is challenging. However, in order to solve the problem presented here, some work is still required in InstCombine.

Therefore, I wondered if something in between could still be done, so this is my proposal: there are already many command-line compiler options that modify the IR output in several ways. Some options are even target dependent, and some targets explicitly set them (in RenderTargetOptions). The InstCombine pass itself has its own small set of options, for example "instcombine-maxarray-size" or "instcombine-code-sinking". Command-line compiler options produce functionally equivalent IR output while respecting established canonicalizations. In all cases, the output is just valid IR code in a form that depends on the selected options. As an example, -O0 produces very different output than -O3 or -Os, yet all of them are valid as input to any target backend. My suggestion is to incorporate a compiler option acting on the InstCombine pass. The option would improve the IR code aimed at Type 2 targets. Of course, this option would not be enabled by default, so the IR output would remain exactly as it is today unless it is explicitly enabled.

What this option would need to do in practice is really easy and straightforward: just bypass (avoid) certain transformations that might be considered harmful for the targets benefiting from it. I performed some simple tests, especially directed at the InstCombineSelect transformations, and found them to generate greatly improved code for both the MSP430 and AVR targets.

Now, I am aware that this proposal might come a bit unexpectedly and may even be regarded as inelegant or undesirable, but maybe, after some careful balancing of pros and cons, it is just what we need to do if we really care about LLVM as a viable platform for 8- and 16-bit targets. As stated earlier, it's easy to implement, it's just an optional compiler setting not affecting major targets at all, and its future extent can be gradually defined or agreed upon as it is put into operation. Any views would be appreciated.

John.

From: llvm-dev <llvm-dev-bounces@lists.llvm.org> On Behalf Of Joan Lluch via llvm-dev
Sent: Monday, October 07, 2019 6:22 PM
To: llvm-dev <llvm-dev@lists.llvm.org>
Subject: [llvm-dev] [AVR] [MSP430] Code gen improvements for 8 bit and 16 bit targets


An option is certainly one way to get this effect; another would be to
add some sort of target-specific query, which would drive the same choices
in the IR transforms. TargetTransformInfo appears to be full of these
sorts of queries.
--paulr

Hi Paul,

TargetTransformInfo hooks are fine, as are the TargetLowering ones, for customising backend code. They would certainly add flexibility compared with relying on instruction Legality alone, and I would up-vote them, along with the addition of the missing legality checks in DAGCombine. However, we shouldn't apply any target-specific code to the frontend optimisations, because frontend code is supposed to be mostly target-independent, and strong dependence on targets there is not desirable. This is why I proposed it the way I did.

John

Hi All,

In relation to the subject of this message, I got my first round of patches successfully reviewed and committed. For reference, they are the following:

https://reviews.llvm.org/D69116
https://reviews.llvm.org/D69120
https://reviews.llvm.org/D69326
https://reviews.llvm.org/D70042

They provide hooks in TargetLowering and DAGCombine that enable interested targets to implement a filter for expensive shift operations. The patches work by preventing certain transformations that would result in expensive code for these targets.

I want to express my gratitude to the LLVM community and particularly to members @spatel and @asl who have directly followed, helped with, and reviewed these patches.

This is half of what's required to get the full benefits. As I explained before, in order to make this fully functional we need to do some work on InstCombine. This is because some of the transformations that we want to avoid are created earlier, in InstCombine, thus defeating the patches above.

My general proposal when I started this (quoted below for reference) was to implement a command-line option that would act on InstCombine by bypassing (preventing) certain transformations. I still think that this is the easier and safer way to obtain the desired goals, but I want to submit it for the consideration of the community again, to make sure I am on the right track.

My current, concrete proposal is to add a boolean command-line option that I would name "enable-shift-relaxation" or just "relax-shifts". This option would act in several places in InstCombineCasts and InstCombineSelect, with the described effects.

I also need to ask about the best way to present test cases for this. I learned how to create test files for codegen transforms (IR to assembly), but now I will be working on the "target-independent" side. For my internal work I have been manually testing C code to IR generation, but I do not know how to create proper test cases for the LLVM project. Any help on this would be appreciated.

Thanks in advance

John

My general proposal when I started this (quoted below for reference), was to implement a command line option that would act on InstCombine by bypassing (preventing) certain transformations. I still think that this is the easier and safer way to obtain the desired goals, but I want to subject that to the consideration of the community again to make sure I am on the right track.

My current concrete proposal is to add a command line option (boolean) that I would name “enable-shift-relaxation” or just “relax-shifts”. This option would act in several places in InstCombineCasts and in InstCombineSelect with the described effects.

I'm not really sold on this part, for the reasons previously discussed.

This is only going to avoid creating such shifts in the passes that get adjusted. It will not completely ban such shifts, meaning they can still exist. So this will only partially prevent 'degrading' existing IR. What about the shifts that were already present in the original input (from C code, e.g.)?

I think you just want to add an inverse set of DAGCombine transforms, also guarded with that target hook you added. That way there's no chance of still ending up with unfavorable shifts on your target, and no middle-end impact from having more than one canonical representation.

Roman

Hi Roman,

Thanks for your input.

The subject of reverse transformations was discussed before (it's even mentioned in my reference message below), and I think there was general agreement that it's best to avoid reversals if the issue can be dealt with in a better way at the origin. I also understood that there was general support for /moving/ as many transformations as possible from InstCombine to DAGCombine, although that is a major goal and not the subject of this message.

This proposal does not aim to remove all shifts; that is just not possible, or even desirable. All targets have shifts. However, shifts by large amounts can be particularly expensive for some targets, as they require a long sequence of instructions to complete.

We only want to prevent NEW shifts that are emitted as a consequence of transformations. LLVM tends to be too eager to create new shifts in circumstances where they are not desirable for some targets. We just want to improve on this.
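As an illustrative cost model only (not actual MSP430 or AVR codegen), a target whose ISA shifts by a single bit per instruction lowers `x >> n` to roughly one instruction per bit shifted:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model: each loop iteration stands for one single-bit shift
   instruction on a target with no multi-bit shifter. */
static uint16_t lshr_one_bit_at_a_time(uint16_t x, unsigned n, unsigned *cost) {
    *cost = 0;
    while (n--) {
        x >>= 1;      /* one single-bit shift instruction */
        (*cost)++;
    }
    return x;
}
```

So a transform that introduces `x >> 15` costs roughly 15 instructions here, while the equivalent sign test is a single compare-and-branch.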

Finally, this does not aim at all to create a different canonical representation. This was also mentioned in the reference message.

I understood from the very beginning that this proposal could be controversial, and I still think that the ultimate solution would be to move a lot of InstCombine into DAGCombine. However, the latter is a major goal, with strong impact on all targets, that would require really strong support and hard work from many community members. I'm advocating something in the middle that would solve the problem for the affected targets with ZERO impact on the non-affected ones.

I hope this helps to clarify it.

Thanks again,

John

As before, I’m not convinced that we want to allow target-based enable/disable in instcombine for performance. That undermines having a target-independent canonical form in the 1st place.

It’s not clear to me what the remaining motivating cases look like. If you could post those here or as bugs, I think you’d have a better chance of finding an answer.

Let’s take a minimal example starting in C and compiling for MSP430 since that’s what we have used as a public approximation of your target:

short isNotNegativeUsingBigShift(short x) {
  return (unsigned short)(~x) >> 15;
}

short isNotNegativeUsingCmp(short x) {
  return x > -1;
}

Currently, we will canonicalize these to the shift form (but you could argue that is backwards).

Alive proof for logical equivalence:
https://rise4fun.com/Alive/uGH

If we disable the instcombine for this, we would have IR like this:

define signext i16 @isNotNegativeUsingShift(i16 signext %x) {
  %signbit = lshr i16 %x, 15
  %r = xor i16 %signbit, 1
  ret i16 %r
}

define signext i16 @isNotNegativeUsingCmp(i16 signext %x) {
  %cmp = icmp sgt i16 %x, -1
  %r = zext i1 %cmp to i16
  ret i16 %r
}

And compile that for MSP430:
$ ./llc -o - -mtriple=msp430 shift.ll
isNotNegativeUsingShift: ; @isNotNegativeUsingShift
; %bb.0:
inv r12
swpb r12
mov.b r12, r12
clrc
rrc r12
rra r12
rra r12
rra r12
rra r12
rra r12
rra r12
ret

isNotNegativeUsingCmp: ; @isNotNegativeUsingCmp
; %bb.0:
mov r12, r13
mov #1, r12
tst r13
jge .LBB1_2
; %bb.1:
clr r12
.LBB1_2:
ret

How do you intend to optimize code that is written in the 1st form? Or is that not allowed in some way?

Hi Spatel,

Thanks for that.

Well, ultimately, my preferred approach would be for the canonical form to be the icmp, not the shift. There are already cases where C-code shifts are converted into icmp, such as this one:

void isNotNegativeUsingBigShift_r(short x, short *r) {
  if ((unsigned short)(~x) >> 15) *r = 1;
}

This gets compiled into this:

define void @isNotNegativeUsingBigShift_r(i16 signext %x, i16* nocapture %r) {
entry:
  %tobool = icmp sgt i16 %x, -1
  br i1 %tobool, label %if.then, label %if.end

if.then:                                          ; preds = %entry
  store i16 1, i16* %r, align 2, !tbaa !2
  br label %if.end

if.end:                                           ; preds = %if.then, %entry
  ret void
}

I think that icmps and selects are more “target independent” than shifts. Instead of in InstCombine, transforms into shifts would be created in DAGCombine and TargetLowering, which already incorporate some quite aggressive transforms.

As an 'experiment', I ran a small number of cases to see how well DAGCombine deals with code that is normally converted into shifts by InstCombine. That is, I disabled InstCombine for these cases and watched what DAGCombine was able to do with the resulting IR. Although DAGCombine is able to create some shifts by itself, it does not currently handle some of the cases normally handled by InstCombine, so the reliance on InstCombine to get some shifts emitted is still strong.

I understand the objection to an InstCombine option. I truly understand that, seriously. And I kind of dislike that option too, which is why I am trying to openly expose my reasons for it.

As per your question, the following are the ‘undesirable’ InstCombine transforms that I identified:

int testSimplifySetCC_0( int x ) // 904 (InstCombineCasts::transformZExtICmp)
{
  return (x & 32) != 0;
}

define i16 @testSimplifySetCC_0(i16 %x) {
entry:
  %and = lshr i16 %x, 5
  %and.lobit = and i16 %and, 1
  ret i16 %and.lobit
}

int testSExtICmp_0( int x ) // 1274 (InstCombineCasts::transformSExtICmp)
{
  return (x & 32) ? -1 : 0;
}

define i16 @testSExtICmp_0(i16 %x) {
entry:
  %0 = shl i16 %x, 10
  %sext = ashr i16 %0, 15
  ret i16 %sext
}

int testExtendSignBit_0( int x ) // 1239 (InstCombineCasts::transformSExtICmp)
{
  return x<0 ? 0 : -1;
}

define i16 @testExtendSignBit_0(i16 %x) {
entry:
  %x.lobit = ashr i16 %x, 15
  %x.lobit.not = xor i16 %x.lobit, -1
  ret i16 %x.lobit.not
}

int testExtendSignBit_1( int x ) // 861 (InstCombineCasts::transformZExtICmp)
{
  return x>-1 ? 1 : 0;
}

define i16 @testExtendSignBit_1(i16 %x) {
entry:
  %x.lobit = lshr i16 %x, 15
  %x.lobit.not = xor i16 %x.lobit, 1
  ret i16 %x.lobit.not
}

int testShiftAnd_1( int x ) // 132 (InstCombineSelect foldSelectICmpAnd)
{
  return x<0 ? 2 : 0;
}

define i16 @testShiftAnd_1(i16 %x) {
entry:
  %0 = lshr i16 %x, 14
  %1 = and i16 %0, 2
  ret i16 %1
}

int foldSelectICmpAndOr( int x, int y ) // 600 (InstCombineSelect foldSelectICmpAndOr)
{
  return (x & 2048) ? (y | 2) : y;
}

define i16 @foldSelectICmpAndOr(i16 %x, i16 %y) {
entry:
  %and = lshr i16 %x, 10
  %0 = and i16 %and, 2
  %1 = or i16 %0, %y
  ret i16 %1
}

There are possibly a few more, and there are also variations of the same, but the above should be the most common ones.
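For reference, the shift-based outputs above really are logically equivalent to the original comparisons. A quick host-side check of three of the cases, emulating the 16-bit `int` of MSP430/AVR with `int16_t`/`uint16_t` (illustrative only, not part of the patches):

```c
#include <assert.h>
#include <stdint.h>

/* testSimplifySetCC_0: (x & 32) != 0  vs  (lshr x, 5) & 1 */
static int16_t setcc_cmp(int16_t x)   { return (x & 32) != 0; }
static int16_t setcc_shift(int16_t x) { return (int16_t)(((uint16_t)x >> 5) & 1); }

/* testShiftAnd_1: x < 0 ? 2 : 0  vs  (lshr x, 14) & 2 */
static int16_t sel_cmp(int16_t x)   { return x < 0 ? 2 : 0; }
static int16_t sel_shift(int16_t x) { return (int16_t)(((uint16_t)x >> 14) & 2); }

/* foldSelectICmpAndOr: (x & 2048) ? (y | 2) : y  vs  ((lshr x, 10) & 2) | y */
static int16_t selor_cmp(int16_t x, int16_t y)   { return (x & 2048) ? (y | 2) : y; }
static int16_t selor_shift(int16_t x, int16_t y) {
    return (int16_t)((((uint16_t)x >> 10) & 2) | (uint16_t)y);
}
```

The point of the proposal is precisely that, although equivalent, the left-hand forms are far cheaper on type 2 targets.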

Thanks,

John

For any of the examples shown below, if the logical equivalent using cmp + other IR instructions is no more than the number of IR instructions of the variant that uses a shift, we should consider reversing the canonicalization.
To make that happen, you would need to show that at least the minimal cases have codegen that is equal or better using the cmp form for at least a few in-tree targets. My guess is that we already have DAGCombine code that handles some of these. Then, you would need to reverse the transform in instcombine and see what happens in the regression tests there (it will probably expose missing transforms in instcombine).

Hi Sanjay,

Please see my comments below

For any of the examples shown below, if the logical equivalent using cmp + other IR instructions is no more than the number of IR instructions of the variant that uses a shift, we should consider reversing the canonicalization.

I understand that by "reversing a canonicalisation" you mean replacing it with a different one in InstCombine. Is this what you mean?
This is possible in only a very few cases. The one that you showed seems to be one of them, but such cases are not the majority. If the criterion for "canonicalization" is the minimal number of IR instructions, then it's not possible to reverse all the transforms to shifts in InstCombine. My suggestion would be to define a set of concrete rules establishing what a "canonical" form is, other than the minimal number of IR instructions.

To make that happen, you would need to show that at least the minimal cases have codegen that is equal or better using the cmp form for at least a few in-tree targets. My guess is that we already have DAGCombine code that handles some of these.

Unfortunately, my tests showed that DAGCombine does not currently handle this well. I found that there are a number of InstCombine transforms that are not available in DAGCombine. This means that it is necessary to add new code to DAGCombine to handle them.

Then, you would need to reverse the transform in instcombine and see what happens in the regression tests there (it will probably expose missing transforms in instcombine).

I don't understand this statement. I think it's DAGCombine, not InstCombine, that suffers from missing transforms. Could you please clarify?

Finally, what you suggest is close to what I have long been stating as my preferred approach, and I searched for support for it. Unfortunately, I can't do this alone; I'm now disabled, and this demands a lot more work than I can reasonably do. Test cases are implemented in ways that are tight and fragile; even simple changes will break them, which adds even more work. My intermediate solution would be ready and working in a couple of days, but if it can't be accepted, then I have no choice other than to kindly ask you to try to obtain some support from community members who would be willing to work on this. I will help with what I can, as I'm convinced that this would be a great improvement for LLVM and I have a strong interest in it, but I just can't do it all.

Thanks,

John.

For any of the examples shown below, if the logical equivalent using cmp + other IR instructions is no more than the number of IR instructions of the variant that uses a shift, we should consider reversing the canonicalization.

I understand that by “reversing a canonicalisation” you mean replacing it by a different one in InstCombine. Is this what you mean?.

That is correct. For example, remove this transform:
zext (icmp sgt X, -1) → xor (lshr X, BitWidth-1), 1

And at the same time, add this transform:
xor (lshr X, BitWidth-1), 1 → zext (icmp sgt X, -1)
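The two forms are indeed logically equivalent; a quick host-side sanity check, emulating i16 with `uint16_t`/`int16_t` (illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* xor (lshr X, 15), 1 -- the shift form, for BitWidth = 16 */
static uint16_t shift_form(uint16_t x) { return (x >> 15) ^ 1; }

/* zext (icmp sgt X, -1) -- the compare form */
static uint16_t cmp_form(uint16_t x) { return (int16_t)x > -1 ? 1 : 0; }
```

Both return 1 exactly when the sign bit of X is clear, so either direction of the rewrite is sound; the question is only which one is canonical.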

Then, you would need to reverse the transform in instcombine and see what happens in the regression tests there (it will probably expose missing transforms in instcombine).

I don't understand this statement. I think it's DAGCombine, not InstCombine, that suffers from missing transforms. Could you please clarify?

If you agree with the above step of reversing the transform in InstCombine, then it is likely that we need 2 things before it can occur:

  1. Replicate the existing transform in DAGCombine, but guarded by a TLI predicate (as you’ve implemented for other patterns already).
  2. Determine what optimization patterns inside of InstCombine are broken by the new canonicalization. After you apply the above changes locally, this will show up as regressions in the unit tests under test/Transforms/InstCombine/ and possibly other passes.
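As a concrete sketch of what such a test looks like (the file name and CHECK lines here are illustrative and show the output one would expect after the proposed change; in-tree tests are normally generated with utils/update_test_checks.py), an InstCombine test is a .ll file under test/Transforms/InstCombine/ with a RUN line and FileCheck assertions:

```llvm
; RUN: opt < %s -instcombine -S | FileCheck %s

define i16 @zext_sgt(i16 %x) {
; CHECK-LABEL: @zext_sgt(
; CHECK-NEXT:    [[CMP:%.*]] = icmp sgt i16 [[X:%.*]], -1
; CHECK-NEXT:    [[R:%.*]] = zext i1 [[CMP]] to i16
; CHECK-NEXT:    ret i16 [[R]]
;
  %cmp = icmp sgt i16 %x, -1
  %r = zext i1 %cmp to i16
  ret i16 %r
}
```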

Finally, what you suggest is about what I have been stating as my preferred approach for long, and I searched support for it. Unfortunately, I can’t do this alone, I’m now disabled and this demands a lot more work than I can reasonably do. Test cases are implemented in ways that are tight and fragile, even simple changes will break them, which will add even more work. My intermediate solution would be ready and working in a couple of days, but if it can’t be accepted, then I don’t have another choice than kindly ask you to try to obtain some support from community members who would be willing to work on this. I will help with what I can, as I’m convinced that this would be a great improvement for LLVM and I have a strong interest on it, but I just can’t do it all.

I have no way to tell in advance how much work is required to fix all of the problems, but my estimate is that each of the examples that you listed is about the same amount of work as what you have already accomplished in DAGCombiner.

To gain the help of others, I would again suggest that you file bug reports to show exactly what each problem is. If you can show that some public target would benefit, then it will certainly get more attention. If that target has many contributors/users (x86, ARM, etc), then others would almost certainly be willing to help fix things.