# BranchProbability/BlockFrequency for a chain of ifs

I’m looking at some code that does something roughly like this:

```c
if (a >= 10 && a < 20) {
  // do calculation
} else if (a >= 30 && a < 40) {
  // do calculation
} else if (a >= 50 && a < 60) {
  // do something
} else if (a >= 70 && a < 80) {
  // do something
}
```

After this comes some other code that was split based on whether we entered any of the `if` bodies, so each `if` body merges to a slightly different spot than the not-taken side of the final `if`.

In my specific case there are really 8 of these range checks.

Without profile data, branch probability seems to assume each `if`'s branch has 50% probability. Because of the way the ifs are cascaded, block frequency concludes that the final `if` body can execute only a very small percentage of the time. Similarly, it thinks we enter the code path taken when none of the ifs fire less than 1% of the time.

In reality, in my specific case, we most often execute none of the ifs. But I think I’d settle for all of the bodies and the no-ifs-taken path having equal weighting.

The bad block frequency, combined with the fact that the merge path after entering an if differs from the path taken when no if fires at all, seems to be causing block placement to create a layout that pessimizes the real common case.

Since all the compares are on the same variable, the original code is essentially a switch, so perhaps we should give each possible path equal weighting.

Is there somewhere we can apply a heuristic to the branch probability so that it can provide a better answer here?

While it is possible that all the value ranges are equally likely, the opposite may be true too. It is reasonable to assume that a developer who knows the program's behavior will test the most likely range first. The compiler could use that as a heuristic to make the first branch more likely (not that LLVM does so today). The 50% branch probability is a sign that no known heuristic applied.

For better performance, using PGO or sample PGO is the best possible way. Another direction is source annotation. Unfortunately, `__builtin_expect` is too strong for many cases; one enhancement would be to teach the builtin to take an additional likelihood parameter. Restructuring the code is also possible, e.g. by compressing the ranges and using a switch…

David

This came from a benchmark, so I can’t change the code. The original source code doesn’t even have the `else`s; the jump threading pass is determining that the ranges are mutually exclusive.

A solution might be to transform cascading ifs into something like a switch, which branches to multiple destinations and weighs each jump equally. We know switch doesn’t support ranges in a jump table, but there are ways around that.

-Kevin

Slightly OT for the original question, but do we manage to factor out the common factor here and actually form the switch?

This could be rewritten as:

```c
int tmp = a / 10;
switch (tmp) {
case 1: …
case 3: …
case 5: …
case 7: …
}
```

Do we do so?

Philip

In my original case we definitely didn’t. I sort of made up the numbers in the example.

I think in the original code the first range was from 390-450. The second range was from 840-900. Then 1290-1350, etc. There is a multiply by 60 preceding the compares. But as you can see, not all of the ranges start at a multiple of 60, though they are 60 entries wide.

Kyle has been looking at exactly these kinds of patterns and trying to work out a better model that avoids deeply chained branches becoming “cold” in weird ways…