Why does LLVM not perform range analysis on integer values?

I previously asked this question on StackOverflow, but I thought it might be better to ask here.

From what I have observed, LLVM does not appear to perform optimizations based on range analysis. Or, if there is a pass for this, it is not applied when running rustc -C opt-level=3 or clang -O3. On the other hand, GCC has reliably performed this optimization in all of the tests I have done on Godbolt.

For example, take the following function (Godbolt). I would expect LLVM to optimize this function to a single ret instruction.

#include <stdint.h>

extern void bar(void); // placeholder side effect (stand-in for the elided body)

void foo(uint64_t a, uint64_t b, uint64_t c) {
    // Some pre-condition that puts limits on a and b
    if (a > 2048 || b > 1024) {
        // Using __builtin_unreachable() here should have the same effect
        return;
    }

    // Due to the previous condition, we know that a + b can not overflow
    if (a + b >= c) {
        return;
    }

    if (a >= c) {
        // unreachable: if a >= c, then a + b >= c must also be true
        bar();
    }
}
Going into this, I was not expecting it to be able to perform complex range analysis within loops, but I was confident that this function would be trivial for LLVM to optimize away. It seems strange that it does not perform such a simple analysis over a handful of straight-line comparisons. Not having this seems particularly problematic for languages like Rust that rely on LLVM to optimize away unnecessary safety checks in release builds.

With all of the work that has gone into LLVM, it seems highly unlikely that no one has attempted to implement this sort of optimization. Was it implemented but then disabled due to issues, or is there some other problem I am unaware of?

Edit: After some experimentation in Godbolt, none of the available x86-64 clang versions (x86-64 clang 3.0.0 through x86-64 clang 17.0.1) were able to perform this optimization. On the other hand, GCC was able to perform it starting with x86-64 gcc 12.1.

LLVM does have integer ranges, see lib/IR/ConstantRange.cpp. However, the information it gathers isn’t relational: it only tracks concrete ranges for individual variables. Optimizing your program requires a relational analysis, one that keeps track of relationships between variables.

For example, a variant of your function that doesn’t require a relational analysis is handled by LLVM just fine.
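The variant itself wasn’t quoted above, so here is a hedged reconstruction of what such a non-relational case could look like (my illustration, not the original poster’s code): once LLVM knows `a` lies in `[0, 2048]`, the per-variable range alone is enough to prove the wrap-around branch dead, no relationship between two variables is needed.

```c
#include <stdint.h>

// Returns 1 only from the branch that range analysis should prove dead.
int foo_nonrelational(uint64_t a) {
    if (a > 2048)
        return -1;

    // With a known to be in [0, 2048], a + 1 cannot wrap around,
    // so this condition is always false.
    if (a + 1 < a)
        return 1;

    return 0;
}
```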

LLVM does have range analysis at some level; take a look at Lazy Value Info. It is just not very powerful.

This general class of optimization is performed by ConstraintElimination.

I’m not sure why it doesn’t work here (ignoring the fact that your silly example flattens all the control flow). Here’s the debug log: Compiler Explorer

This constraint system gets constructed:

%a <= 2048
%b <= 1024
%a + %b + -1 * %c <= -1
%a + -1 * %c <= -1

I think the issue may be that we don’t add explicit uge 0 predicates for the unsigned constraint system?
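As a sanity check of that diagnosis (my sketch of the reasoning, assuming the solver asserts the branch condition and looks for infeasibility): to fold the final `if (a >= c)`, the system above is extended with $c - a \le 0$, and with an explicit $b \ge 0$ fact the contradiction is immediate:

```latex
a + b - c \le -1 \;\wedge\; c - a \le 0
\;\Longrightarrow\;
b = (a + b - c) + (c - a) \le -1 + 0 = -1,
```

which contradicts $b \ge 0$. Without the non-negativity predicate for the unsigned values, the system still admits solutions with $b \le -1$, so nothing is removed.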

cc @fhahn


Sorry about that, I think I may have abstracted away too many details when creating the MRE. The original use case which led me to this was attempting to place b items (and a header) into an array of length c, starting at an arbitrary index a. In the example, the if-statement I hoped it would optimize away represents a bounds check performed as part of indexing into the array. The check is implemented by the standard library and inlined into the calling function.
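Concretely, the shape was roughly the following (a hedged reconstruction, not the original code; `write_items` and the explicit `assert` are stand-ins for the library’s inlined bounds check):

```c
#include <assert.h>
#include <stdint.h>

// Write a header plus b items into a buffer of length c, starting at index a.
void write_items(uint8_t *buf, uint64_t a, uint64_t b, uint64_t c) {
    // Pre-conditions, as in the MRE
    if (a > 2048 || b > 1024)
        return;

    // Capacity check; a + b cannot overflow given the bounds above
    if (a + b >= c)
        return;

    for (uint64_t i = 0; i <= b; i++) {
        // Stand-in for the library's per-access bounds check, which the
        // optimizer should be able to prove redundant: a + i <= a + b < c
        assert(a + i < c);
        buf[a + i] = 0;
    }
}
```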

Yep I think that’s the issue. Let me prepare a patch and check the impact on compile time.

That’s exactly the kind of check the ConstraintElimination pass should handle. I’ll share a patch handling your example case (probably ready by tomorrow). If we still fail to remove such checks after that, please let us know (ideally via an issue at Issues · llvm/llvm-project · GitHub).