Hello,
I’ve tested LLVM’s ScalarEvolution analysis with different small examples. Here’s a result that surprised me:
void dont_optimize_away(int);

void foo(int n) {
    int res = 0;
    for (int i = 0; i < n; ++i) {
        // The kinds of things that SCEV can recognize are quite astounding…
        // It beats GCC on this one.
        //res = res + i*i;
        // And then there are the kinds of things it can’t recognize…
        res = res * i;
    }
    dont_optimize_away(res);
}
// Compile using
$ clang -Wall -std=c99 -O2 -c clang_scev_surprise.c -S -o - -emit-llvm
In this loop, the value of res is always zero, yet LLVM does not optimize the loop away at -O2 (whereas GCC does). Is there a simple explanation for this?
Is this case general enough that it would be worth handling in LLVM? It looks like something ScalarEvolution could recognize. Alternatively, we could constant-fold the particular pattern of a multiplication whose operand is res = phi(0, res).
Cheers,
Jonas
clang_scev_surprise.c (339 Bytes)