Remove zext-unfolding from InstCombine

Hi all,

I have a question regarding a transformation that is carried out in InstCombine and was introduced by r48715. It unfolds expressions of the form `zext(or(icmp, icmp))` into `or(zext(icmp), zext(icmp))` to expose pairs of `zext(icmp)`. In a subsequent iteration, these `zext(icmp)` pairs can then (possibly) be optimized by another transformation (which predates r48715) that replaces them with bitwise or integer operations. The code in question is located in `visitZExt()` in InstCombineCasts.cpp:

if (ICmpInst *ICI = dyn_cast<ICmpInst>(Src))
  return transformZExtICmp(ICI, CI);

BinaryOperator *SrcI = dyn_cast<BinaryOperator>(Src);
if (SrcI && SrcI->getOpcode() == Instruction::Or) {
  // zext (or icmp, icmp) --> or (zext icmp), (zext icmp) if at least one
  // of the (zext icmp) will be transformed.
  ICmpInst *LHS = dyn_cast<ICmpInst>(SrcI->getOperand(0));
  ICmpInst *RHS = dyn_cast<ICmpInst>(SrcI->getOperand(1));
  if (LHS && RHS && LHS->hasOneUse() && RHS->hasOneUse() &&
      (transformZExtICmp(LHS, CI, false) ||
       transformZExtICmp(RHS, CI, false))) {
    Value *LCast = Builder->CreateZExt(LHS, CI.getType(), LHS->getName());
    Value *RCast = Builder->CreateZExt(RHS, CI.getType(), RHS->getName());
    return BinaryOperator::Create(Instruction::Or, LCast, RCast);
  }
}

The first two lines perform the actual `zext(icmp)` optimization. The remaining lines were added by r48715 and perform the unfolding mentioned above. However, `foldCastedBitwiseLogic()` contains the reverse of this unfolding, transforming `logic(cast(A), cast(B))` into `cast(logic(A, B))`:

if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) &&
    shouldOptimizeCast(Cast0) && shouldOptimizeCast(Cast1)) {
  Value *NewOp = Builder->CreateBinOp(LogicOpc, Cast0Src, Cast1Src,
                                      I.getName());
  return CastInst::Create(CastOpcode, NewOp, DestTy);
}

The `(!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src))` part was added by r48715 to ensure that InstCombine does not enter an endless loop, switching back and forth between the two transformations. r48715 also added an InstCombine test case, `zext-or-icmp.ll`, that would enter such an endless loop if this check were not present. However, the check also generally prevents `foldCastedBitwiseLogic()` from folding operations with two icmps as operands. Something like

%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4

cannot be transformed, even though the contained `zext(icmp)` pairs cannot be optimized by `transformZExtICmp()` either, so the casts survive. Therefore, in r275989 (https://reviews.llvm.org/D22511) I tried to narrow the check in `foldCastedBitwiseLogic()` to allow the folding of casts in the presence of icmps. For the above example this would result in:

%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8

Although `zext-or-icmp.ll` did not enter an endless loop with my change, it later turned out that the assumptions I made in r275989 were too optimistic and endless loops could still occur, as pointed out by Benjamin Kramer in http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160718/374347.html, so the change was reverted.

I apologize in advance for this possibly naive question, but I was wondering whether it is feasible to remove the unfolding introduced by r48715 in order to enable cast-folding like in the example above. I have already tested this and run all of LLVM's test cases via the `make check` target, and it seems that the only test case that relies on this unfolding is the mentioned `zext-or-icmp.ll`. I don't know if Evan Cheng is still responsible for this code region, but since he is the original author of the commit, I thought it would be safe to include him here (I hope it's the right email address).

Thank you in advance for your thoughts on this!

Best regards,
Matthias

Hi Matthias,

I’ve been fixing similar problems recently (eg, https://llvm.org/bugs/show_bug.cgi?id=28476 ). I think your example (the test in zext-or-icmp.ll) is another missed application of DeMorgan’s Law:

char foo(char a, char b) {
  return (((a & 1) == 0) | (b == 0));
}

char goo(char a, char b) {
  return !(((a & 1) != 0) & (b != 0));
}

Make sure I didn’t botch that translation: the 1st is the C equivalent of zext-or-icmp.ll, and the 2nd is what we really want it to be optimized to instead of what’s shown in that test case.

define i8 @foo(i8 %a, i8 %b) {
%and = and i8 %a, 1
%cmp = icmp eq i8 %b, 0
%xor = xor i8 %and, 1
%zext = zext i1 %cmp to i8
%or = or i8 %zext, %xor
ret i8 %or
}

define i8 @goo(i8 %a, i8 %b) {
%cmp = icmp ne i8 %b, 0
%zext = zext i1 %cmp to i8
%and = and i8 %zext, %a
%xor = xor i8 %and, 1
ret i8 %xor
}

The 2nd function is the better IR (one less logic op), so you need to add a transform to produce that and get rid of the existing limitation. Any InstCombine folds that rely on bypassing an earlier fold are wrong IMO.

Hi Sanjay,

thank you a lot for your answer. I understand that in your examples it is desirable that `foo` and `goo` are canonicalized to the same IR, i.e., something like `@goo`. However, I still have a few open questions; please correct me in case I'm thinking in the wrong direction.

I've been fixing similar problems recently (eg, https://llvm.org/bugs/show_bug.cgi?id=28476 – [InstCombine] functionally-equivalent comparison code produces different IR). I think your example (the test in zext-or-icmp.ll) is another missed application of DeMorgan's Law:

char foo(char a, char b) {
  return (((a & 1) == 0) | (b == 0));
}

char goo(char a, char b) {
  return !(((a & 1) != 0) & (b != 0));
}

Make sure I didn't botch that translation: the 1st is the C equivalent of zext-or-icmp.ll, and the 2nd is what we really want it to be optimized to instead of what's shown in that test case.

Just to make sure that I understand this correctly: your goal here would be to add a transformation that changes `foo` and zext-or-icmp.ll into a form that `matchDeMorgansLaws()` can recognize and transform like `goo`, to eventually reach the "optimal" form in `@goo`.

Therefore, in a first step, I examined the IR that clang generates for `foo` and `goo` just before they are passed to InstCombine:

define signext i8 @foo_before_InstCombine(i8 signext %a, i8 signext %b) local_unnamed_addr #0 {
entry:
  %conv = sext i8 %a to i32
  %and = and i32 %conv, 1
  %cmp = icmp eq i32 %and, 0
  %conv1 = zext i1 %cmp to i32
  %conv2 = sext i8 %b to i32
  %cmp3 = icmp eq i32 %conv2, 0
  %conv4 = zext i1 %cmp3 to i32
  %or = or i32 %conv1, %conv4
  %conv5 = trunc i32 %or to i8
  ret i8 %conv5
}

; Function Attrs: nounwind ssp uwtable
define signext i8 @goo_before_InstCombine(i8 signext %a, i8 signext %b) local_unnamed_addr #0 {
entry:
  %conv = sext i8 %a to i32
  %and = and i32 %conv, 1
  %cmp = icmp ne i32 %and, 0
  %conv1 = zext i1 %cmp to i32
  %conv2 = sext i8 %b to i32
  %cmp3 = icmp ne i32 %conv2, 0
  %conv4 = zext i1 %cmp3 to i32
  %and5 = and i32 %conv1, %conv4
  %tobool = icmp ne i32 %and5, 0
  %lnot = xor i1 %tobool, true
  %lnot.ext = zext i1 %lnot to i32
  %conv6 = trunc i32 %lnot.ext to i8
  ret i8 %conv6
}

In both functions, the `icmp` operations are immediately followed by `zext` instructions, which are directly optimized away by `transformZExtICmp()`; this is why in the end only one of the `icmp` instructions is left. However, that means `foo` will not be lowered to the IR we have in zext-or-icmp.ll, where, as opposed to `@foo_before_InstCombine`, the `zext` is placed after the `or` instruction:

define i8 @zext_or_icmp_icmp(i8 %a, i8 %b) {
  %mask = and i8 %a, 1
  %toBool1 = icmp eq i8 %mask, 0
  %toBool2 = icmp eq i8 %b, 0
  %bothCond = or i1 %toBool1, %toBool2
  %zext = zext i1 %bothCond to i8
  ret i8 %zext
}

That means that as long as the `zext` in zext-or-icmp.ll isn't pushed in front of the `icmp` operations, it will not be possible to remove one of them via `transformZExtICmp()` to get to something like `@goo`. So if I'm not mistaken, we would still need this unfolding for `zext-or-icmp.ll` if we want it to be optimized equivalently to `foo`, and we would also have to take this into account when adapting the unfolding for DeMorgan's Law.

In general, and independently of my concerns above, I would think that to get from `(((a & 1) == 0) | (b == 0))` to `!(((a & 1) != 0) & (b != 0))`, we would first need to transform it into `(!((a & 1) != 0) | !(b != 0))` to make it amenable to DeMorgan's Law. I also tried to adapt `foo` to this form; however, the problem is that InstCombine recognizes `!(... != 0)` as redundant and simplifies it back to `(... == 0)`. For our case, that means `(!((a & 1) != 0) | !(b != 0))` is immediately transformed back to `(((a & 1) == 0) | (b == 0))` before `matchDeMorgansLaws()` sees it.

I also wanted to let you know that in the meantime I prepared another patch that tries to solve the mentioned problems with the `zext` unfolding (still without taking DeMorgan's Law into account): D22864 (https://reviews.llvm.org/D22864), "[InstCombine] Refactor optimization of zext(or(icmp, icmp)) to enable more aggressive cast-folding". It might not yet be the optimal approach for this case, but it enables us to reduce the existing limitations when folding `zext` operations. In case I misinterpreted your suggestions, I'm of course interested in taking another approach.

Sorry if I misunderstood you or overcomplicated the matter; I'm looking forward to your thoughts on this. Also, if anything is unclear or I forgot to mention something important, don't hesitate to ask for a more detailed explanation.

Best,
Matthias

Hi Matthias -

Sorry for the delayed reply. I think you’re on the right path with D22864. If I’m understanding it correctly, my foo() example and zext_or_icmp_icmp() will be equivalent after your patch is added to InstCombine.

If that’s right, then you will be able to add a pattern match in matchDeMorgansLaws() for that exact sequence of instructions where we only have one icmp that is zexted. As you noted, we can’t add the ‘not’ ops back in because there’s another fold that removes them.

It may be helpful to look at the chunk of code that I removed from matchDeMorgansLaws() in https://reviews.llvm.org/D22271 ; it’s not what we need to solve this case, but that’s the general form of the pattern matching that we probably want.

We may need to answer this question first though:
define i8 @notLowBit(i8 %a) {
%lowBit = and i8 %a, 1
%not = xor i8 %lowBit, 1
ret i8 %not
}

Should this be canonicalized to:
define i8 @notLowBit(i8 %a) {
%not = xor i8 %a, -1
%lowBit = and i8 %not, 1
ret i8 %lowBit
}

…because xor with -1 is the canonical ‘not’ operation?

Hi Sanjay,

Hi Matthias -

Sorry for the delayed reply. I think you're on the right path with D22864.

No problem, thank you for your answer!

If I'm understanding it correctly, my foo() example and zext_or_icmp_icmp() will be equivalent after your patch is added to InstCombine.

Yes, that’s correct.

If that's right, then you will be able to add a pattern match in matchDeMorgansLaws() for that exact sequence of instructions where we only have one icmp that is zexted. As you noted, we can't add the 'not' ops back in because there's another fold that removes them.

It may be helpful to look at the chunk of code that I removed from matchDeMorgansLaws() in D22271 (https://reviews.llvm.org/D22271), "[InstCombine] reverse canonicalization of xor(zext i1 A), 1 <--> zext(not i1 A, true) (PR28476)"; it's not what we need to solve this case, but that's the general form of the pattern matching that we probably want.

Ok, I understand, thank you for the clarification. I’ve therefore tried to add the following pattern matches to matchDeMorgansLaws():

(A ^ 1) | zext(B == 0) -> (A & zext(B != 0)) ^ 1
(A ^ 1) & zext(B == 0) -> (A | zext(B != 0)) ^ 1

where `A` is an arbitrary expression that at most uses its lowest bit (like the `and i8 %a, 1` in the `@foo` below).

When testing this on ...

define i8 @foo(i8 %a, i8 %b) {
  %and = and i8 %a, 1
  %xor = xor i8 %and, 1
  %cmp = icmp eq i8 %b, 0
  %zext = zext i1 %cmp to i8
  %or = or i8 %xor, %zext
  ret i8 %or
}

… and also on ...

define i8 @zext_or_icmp_icmp(i8 %a, i8 %b) {
  %mask = and i8 %a, 1
  %toBool1 = icmp eq i8 %mask, 0
  %toBool2 = icmp eq i8 %b, 0
  %bothCond = or i1 %toBool1, %toBool2
  %zext = zext i1 %bothCond to i8
  ret i8 %zext
}

… InstCombine will now eventually get to:

define i8 @zext_or_icmp_icmp(i8 %a, i8 %b) {
  %1 = icmp ne i8 %b, 0
  %2 = zext i1 %1 to i8
  %zext.demorgan = and i8 %2, %a
  %zext = xor i8 %zext.demorgan, 1
  ret i8 %zext
}

just what we wanted :)

The only problem that I am currently facing is symmetry: when we replace `%or = or i8 %xor, %zext` by `%or = or i8 %zext, %xor` in the above `foo`, the patterns obviously won't trigger anymore. So to handle `(A ^ 1) | zext(B == 0)` and `zext(B == 0) | (A ^ 1)` in a uniform way, we may need another transform. Is it feasible, for example, to introduce a canonicalization that always moves a cast to the right-hand side of a binary operation?

We may need to answer this question first though:
define i8 @notLowBit(i8 %a) {
  %lowBit = and i8 %a, 1
  %not = xor i8 %lowBit, 1
  ret i8 %not
}

Should this be canonicalized to:
define i8 @notLowBit(i8 %a) {
  %not = xor i8 %a, -1
  %lowBit = and i8 %not, 1
  ret i8 %lowBit
}

...because xor with -1 is the canonical 'not' operation?

Do we need such a canonicalization in the presence of the above patterns?

Best,
Matthias

… InstCombine will now eventually get to:

define i8 @zext_or_icmp_icmp(i8 %a, i8 %b) {
  %1 = icmp ne i8 %b, 0
  %2 = zext i1 %1 to i8
  %zext.demorgan = and i8 %2, %a
  %zext = xor i8 %zext.demorgan, 1
  ret i8 %zext
}

just what we wanted :)

Great!

The only problem that I am currently facing is symmetry: when we replace
`%or = or i8 %xor, %zext ` by `%or = or i8 %zext, %xor` in the above `foo`
the patterns obviously won’t trigger anymore. So to handle `(A ^ 1) |
zext(B == 0)` and ` zext(B == 0) | (A ^ 1)` in a uniform way we may need
another transform. Is it feasible, for example, to introduce a
canonicalization that moves a cast always to the right-hand side of a
binary operation?

This is a general problem for any pattern matching of commutative
operators. Search for "matchSelectFromAndOr" to see a particularly bad
example. :)

In theory, we should be able to use "value complexity" in this situation,
but it's broken:
https://llvm.org/bugs/show_bug.cgi?id=28296

So either we need to fix that or we'll have to include extra patterns to
match the commuted possibilities.

> We may need to answer this question first though:
> define i8 @notLowBit(i8 %a) {
> %lowBit = and i8 %a, 1
> %not = xor i8 %lowBit, 1
> ret i8 %not
> }
>
> Should this be canonicalized to:
> define i8 @notLowBit(i8 %a) {
> %not = xor i8 %a, -1
> %lowBit = and i8 %not, 1
> ret i8 %lowBit
> }
>
> ...because xor with -1 is the canonical 'not' operation?

Do we need such a canonicalization in the presence of the above patterns?

This one is interesting. When I wrote the example, I didn't realize that we
actually have the inverse canonicalization: the 2nd case is always
transformed to the 1st because it reduces the demanded bits of the
constant. However, there is a FIXME comment about that in
SimplifyDemandedUseBits:

    // FIXME: for XOR, we prefer to force bits to 1 if they will make a -1.

So either we fix that and adjust the patterns that you are about to add
or...

Welcome to InstCombine. :)

This is a general problem for any pattern matching of commutative operators. Search for "matchSelectFromAndOr" to see a particularly bad example. :)

In theory, we should be able to use "value complexity" in this situation, but it's broken:
https://llvm.org/bugs/show_bug.cgi?id=28296 – [InstCombine] unary operations are assigned the wrong complexity

So either we need to fix that or we'll have to include extra patterns to match the commuted possibilities.

Thank you for that pointer! I had a look at this `getComplexity` function, and it would obviously be a cleaner solution to use this complexity-based approach. I think it would really be worth fixing this bug, also in order to simplify `matchSelectFromAndOr` :) So I'll first try to come up with a solution for that, if feasible. Otherwise, as a start, we might have to go with manual matching of the commuted patterns.

> We may need to answer this question first though:
> define i8 @notLowBit(i8 %a) {
> %lowBit = and i8 %a, 1
> %not = xor i8 %lowBit, 1
> ret i8 %not
> }
>
> Should this be canonicalized to:
> define i8 @notLowBit(i8 %a) {
> %not = xor i8 %a, -1
> %lowBit = and i8 %not, 1
> ret i8 %lowBit
> }
>
> ...because xor with -1 is the canonical 'not' operation?

Do we need such a canonicalization in the presence of the above patterns?

This one is interesting. When I wrote the example, I didn't realize that we actually have the inverse canonicalization: the 2nd case is always transformed to the 1st because it reduces the demanded bits of the constant. However, there is a FIXME comment about that in SimplifyDemandedUseBits:

    // FIXME: for XOR, we prefer to force bits to 1 if they will make a -1.

So either we fix that and adjust the patterns that you are about to add or…

Yes, I also realized that this inverse transform exists. But for now, if I'm not mistaken, that doesn't seem to be a problem, because the new patterns currently don't seem to need such a transform.

Welcome to InstCombine. :)

Thank you :)

Best,
Matthias