Reasoning about results of min and max with a constant

Hi all,

Say we have this IR:

%1 = icmp slt i16 %x, 0
%.x = select i1 %1, i16 0, i16 %x

This is the canonical form of what is effectively max(x, 0).

From what I can tell, LLVM has no facilities to determine from this code that %.x >= 0, so (for example) an SExt on %.x will not be converted to a ZExt.

I'm interested in seeing what sorts of changes would be needed to recognize this pattern in existing facilities (computeKnownBits, etc.) in order to more broadly apply optimizations that use those facilities.
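
For concreteness, here is roughly the query I'd like to succeed (just a
sketch, assuming the KnownBits-returning overload of computeKnownBits; the
helper name is mine):

#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/KnownBits.h"

using namespace llvm;

// SelMax is the %.x select above; DL is the module's data layout.
bool isProvablyNonNegative(const Value *SelMax, const DataLayout &DL) {
  KnownBits Known = computeKnownBits(SelMax, DL);
  // Today this comes back false for the smax-with-zero idiom, so folds
  // that depend on the sign bit (like sext -> zext) never fire.
  return Known.isNonNegative();
}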

Any insight would be appreciated. Thanks!

computeKnownBitsFromOperator is where the logic would go. You'd call
matchSelectPattern on the select, and if it gave back SPF_SMAX with the RHS
being zero, you'd know it's a signed max with zero.
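
Roughly like this (untested sketch; the helper is hypothetical and the real
change would sit inside computeKnownBitsFromOperator's handling of selects,
assuming matchSelectPattern reports the constant as the RHS):

#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
#include "llvm/Support/KnownBits.h"

using namespace llvm;
using namespace llvm::PatternMatch;

static void refineKnownBitsForSelect(SelectInst *SI, KnownBits &Known) {
  Value *LHS, *RHS;
  SelectPatternFlavor SPF = matchSelectPattern(SI, LHS, RHS).Flavor;
  // smax(x, 0): the result is never negative, so its sign bit is known
  // zero and a later sext of it can become a zext.
  if (SPF == SPF_SMAX && match(RHS, m_Zero()))
    Known.Zero.setSignBit();
}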

LVI has a couple of special cases around select idioms, and this would be another reasonable one to add; it would give range analysis (used in JumpThreading and CVP) for this idiom. I actually thought we already had this one.
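
For reference, here is a standalone sketch of the range LVI could report
for the select above (assuming ConstantRange::getFull and
ConstantRange::smax):

#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include <cassert>

using namespace llvm;

int main() {
  ConstantRange X = ConstantRange::getFull(16); // nothing known about %x
  ConstantRange Zero(APInt(16, 0));             // the constant 0
  ConstantRange Max = X.smax(Zero);             // range of smax(%x, 0)
  assert(Max.getSignedMin() == 0);              // i.e. %.x is provably >= 0
  assert(Max.getSignedMax() == 32767);
  return 0;
}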

Looks like LVI actually does have cases for max and min; would it be better to allow ValueTracking to use range analysis instead?

- CL

Yes, I was proposing something like https://ghostbin.com/paste/r5uou

I actually meant using the range analysis provided by LVI in computeKnownBits, instead of matching select patterns again in computeKnownBits. For example, consider this code:

// num is unsigned
if (num < 4)
  num = num & 4; // this can be proven to be 0

If we allowed computeKnownBits to work in conjunction with range analysis, we could optimize this case.
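
A minimal sketch of the glue I have in mind (hypothetical helper, not an
existing API; it turns an unsigned range such as LVI's [0, 4) for num into
known-zero high bits):

#include "llvm/IR/ConstantRange.h"
#include "llvm/Support/KnownBits.h"

using namespace llvm;

static KnownBits knownBitsFromRange(const ConstantRange &CR) {
  KnownBits Known(CR.getBitWidth());
  // Every bit above the highest bit of the range's unsigned max is zero
  // for all values in the range; for [0, 4) on an i16 that is bits
  // 2..15, so num & 4 is known to be 0.
  Known.Zero.setHighBits(CR.getUnsignedMax().countLeadingZeros());
  return Known;
}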

- CL

I’m wondering if anyone would be willing to help me implement this.

- CL