# [RFC] Value Range Based Optimization Opportunity in LLVM

Hi All,

We have recently found optimization opportunities that arise from replicating code into branches. In general, the opportunity we are pursuing looks like the following.

Given pseudo-code:

```
// block A
if (some condition)
  // block B
// block C
```

If it can be efficiently proven that some portion of block C can be simplified had control flow not gone through the if statement, it might be useful to convert this CFG into a diamond and hoist that portion of block C into both block B and the new block.

Consider the following example:

```c
int test(int *Ptr, int a, int b, int c, int d) {
  int Ret = 0;
  if (__builtin_expect(!Ptr, 0)) {
    Ret = calli(a);
    // return Ret / (a|1) / (b|1) / (c|1) / (d|1); // Copy return to here
  }
  return Ret / (a|1) / (b|1) / (c|1) / (d|1); // This can be simplified to return 0
}
```

In this case, replicating the return statement into the branch allows the optimizer to prove that, on the path that skips the if, the value of Ret at the end of the function is 0, and to eliminate the arithmetic calculations there.
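For concreteness, here is a hand-written sketch of what the proposed transformation would produce for this function. The stub for calli is an assumption for illustration only; in the original example calli is an external function.

```c
#include <stddef.h>

/* Assumed stub for the external calli() in the original example. */
static int calli(int a) { return a * 3; }

/* Sketch of the diamond-converted function: the division chain is
 * replicated into both arms, and in the arm where Ret is known to
 * be 0 the whole chain folds to "return 0". */
int test_transformed(int *Ptr, int a, int b, int c, int d) {
  if (__builtin_expect(!Ptr, 0)) {
    int Ret = calli(a);
    return Ret / (a | 1) / (b | 1) / (c | 1) / (d | 1);
  }
  return 0; /* 0 divided by nonzero divisors is always 0 */
}
```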

A second example:

```c
unsigned long funcReturningArbitraryi64(unsigned char *p);

#define LENGTH(uv) ( (uv) < 0x80       ? 1 : \
                     (uv) < 0x800      ? 2 : \
                     (uv) < 0x10000    ? 3 : \
                     (uv) < 0x200000   ? 4 : \
                     (uv) < 0x4000000  ? 5 : \
                     (uv) < 0x80000000 ? 6 : 7 )

int func(unsigned char *p, bool flag)
{
  unsigned long c = *p;
  int len;
  // …
#ifdef _ORIG
  if (flag) {
    // …
    c = funcReturningArbitraryi64(p);
  }
  len = LENGTH(c);
#else
  if (flag) {
    // …
    c = funcReturningArbitraryi64(p);
    len = LENGTH(c);
  } else {
    len = LENGTH(c);
  }
#endif

  // …

  return len;
}
```

In this case, we see that creating an else block and replicating the length computation into both the if and else branches (as in the code guarded by the #else) enables the macro LENGTH in the else branch to be optimized.
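As a concrete check of why the else branch simplifies (this restates the macro's behavior, it is not code from the thread): in the else branch c still holds *p, an unsigned char, so c <= 0xFF < 0x800 and every comparison in LENGTH beyond the first folds away:

```c
/* What LENGTH(c) reduces to when c is known to be in [0, 255]:
 * only the first comparison can go either way. */
int length_of_byte(unsigned char c) {
  return c < 0x80 ? 1 : 2;
}
```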

Most of the examples we have come up with so far are centered around value ranges along the conditional branches. When the range of values a symbol can have along different branches is provably different, opportunities for optimization may arise. However, this likely isn’t the only category of optimizations that could benefit from this.
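A minimal, hypothetical illustration of this pattern (not taken from the thread): inside the taken branch the range of x is provably [0, 9], so a later comparison against that range folds to a constant on that path:

```c
/* Hypothetical example: within the branch, x is in [0, 9],
 * so (x < 100) is provably 1 and can be folded. */
int range_example(int x) {
  if (x >= 0 && x < 10) {
    return x < 100; /* always 1 on this path */
  }
  return 0;
}
```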

Is there an existing transformation in LLVM that should be doing this already that is missing this opportunity? If not, we would like to pursue adding this. Of course, this optimization would be subject to a cost model as it may result in code growth. For example, it may not be advantageous to do this if the simplified branch is cold. If anyone has any comments/suggestions we are very much interested in hearing them.

Regards,

Tony Jiang, M.Sc.
LLVM PPC Backend Development
IBM Toronto Lab, C2/712/8200/MKM
8200 Warden Ave, Markham, L6G 1C7
Email: jtony@ca.ibm.com
Phone: 905-413-3676

We have two transformations that track ranges along CFG edges using LazyValueInfo: JumpThreading and CorrelatedValuePropagation. I don’t think that either does what you’re proposing.

-Hal

NewGVN does some of it.
It will discover if backpropagation through a phi enables an operation to be put in terms of existing expressions or constants.
It does not track ranges though, only explicit values.

In your ORIG block, if LENGTH can be expressed as a merge of already-computed expressions or constants in the program, it will discover it.

You can stare at examples of this (and compute quite complicated ones if you wish) at test/Transforms/NewGVN/completeness.ll

Both your examples already have a phi in them in LLVM IR, so this is just a matter of proper range propagation.
It is really:

```c
int test()
{
  int Ret0 = 0;
  if (!Ptr)
    Ret1 = calli(a);
  RetPhi = PHI(Ret0, Ret1);
  return RetPhi;
}
```

It should already be possible to determine the value of Ret0 to be 0 based on ranges without duplicating anything, and the extra return doesn’t give you anything.

If you want to do it based on ranges: a combination of a value range computation using the existing sparse propagation engine, and a transform at phi nodes like we use for NewGVN, should be able to handle what you want here. The existing LazyValueInfo may not get an optimal answer for various reasons (it tries to propagate in the wrong direction, so it loses some context).

You could also use the same basic infrastructure that PredicateInfo uses, and rename values at the points where the ranges could have changed, to get additional precision (unlike PredicateInfo, which renames only where there are branches).

There are quite a number of papers on making SSA forms that are effective at range propagation. PredicateInfo is building pruned e-SSA, which is the basis of a number of range analyses (see http://homepages.dcc.ufmg.br/~fernando/publications/papers/CGO13_raphael.pdf and the slides at http://laure.gonnord.org/pro/research/ER03_2015/RangeAnalysis.pdf).

The only case where you should ever need to duplicate code is when the expression is something not already computed in the program. That’s not the case for your first example; it can be done without code duplication. For your second example, it would require insertion, but you could know ahead of time which computations you actually need to insert.

For example, for your first example, the IR is something like this:

```llvm
; Function Attrs: nounwind ssp uwtable
  %6 = icmp eq i32* %0, null
  br i1 %6, label %7, label %9, !prof !2

; <label>:7:                                      ; preds = %5
  %8 = tail call i32 (i32, ...) bitcast (i32 (...)* @calli to i32 (i32, ...)*)(i32 %1) #2
  br label %9

; <label>:9:                                      ; preds = %7, %5
  %10 = phi i32 [ 0, %5 ], [ %8, %7 ]
  %11 = or i32 %1, 1
  %12 = sdiv i32 %10, %11
  %13 = or i32 %2, 1
  %14 = sdiv i32 %12, %13
  %15 = or i32 %3, 1
  %16 = sdiv i32 %14, %15
  %17 = or i32 %4, 1
  %18 = sdiv i32 %16, %17
  ret i32 %18
}
```

NewGVN actually understands that, for example, %12 == phi(sdiv %8, %11, 0)[1] it just doesn’t keep it because it doesn’t try to prove that it’s free if the rest go away. I could make it do that (which would not require real code duplication, just code movement).

[1]

```
Processing instruction %12 = sdiv i32 %10, %11
Simplified = sdiv i32 0, %11 to constant i32 0
Found phi of ops operand i32 0 in %5

Cannot find phi of ops operand for = sdiv i32 %8, %11 in block %7
```

Hi Daniel,

Thank you very much for your valuable feedback. We are looking into the range analyses you mentioned. Once we have a better understanding of them, we can discuss more in follow-up emails.

Regards,

Tony Jiang, M.Sc.
LLVM PPC Backend Development
IBM Toronto Lab, C2/712/8200/MKM
8200 Warden Ave, Markham, L6G 1C7
Email: jtony@ca.ibm.com
Phone: 905-413-3676

Hi Tony,

I am currently working on a polyhedral value analysis that I first sketched here [0]. It is parametric and supports piece-wise defined values; e.g., the abstract value for Ret in the first example would look something like this:

```
Ret = ( 0        if __builtin_expect(!Ptr, 0) == 0 )
   or ( calli(a) if __builtin_expect(!Ptr, 0) != 0 )
```

Similarly for the second example the abstract value of len would depend
on flag and p for some cases and on flag and funcReturningArbitraryi64
for others.

Given such a piece-wise abstract value it is easy to find the conditions
under which they are constant or simple (for some definition of simple).

You can also evaluate expressions under certain constraints, e.g., in a certain basic block or iteration of a loop. If you want, you can even derive IR from the conditions, or from the value under some condition.

I will present the analysis (and a subsequent memory analysis) in the
SRC at the US LLVMDev meeting next month. If you are interested we can
also look into using it to build the transformation you are describing.

Cheers,
Johannes

[0] http://llvm.org/devmtg/2017-02-04/ (the llvm website was down when I wrote this mail, though)