[IndVars] Rewriting exit value of SCEV add expressions

Hi,

I found an issue with the interaction between the IndVars and LSR passes after https://reviews.llvm.org/rL346397.

There were two main changes in this commit:

  1. Previously we propagated SCEV add expressions even if the defining instruction of the exit value had hard uses inside the loop. After rL346397 we forbid propagation of SCEV add expressions if the defining instruction of the exit value has a hard use inside the loop.

  2. Previously, when checking for hard uses, we only looked at the immediate uses of the defining instruction. After rL346397 we walk all the way down the def-use chain.

As I understand it, the motivation for this was that if the defining instruction of the exit value is going to stay in the loop due to a hard use (i.e. it cannot easily be eliminated), there is no benefit in rewriting the exit value.

But I think it is still profitable in some cases, because it helps reduce the loop strength. And in that sense we regressed here, because before rL346397 we would rewrite the exit value of a SCEV add expression even if there was a hard use inside the loop.

Here is the reproducer:

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-unknown-linux-gnu"

@c = external dso_local local_unnamed_addr global i32, align 4
@a = external dso_local local_unnamed_addr global i32*, align 8
@b = external dso_local local_unnamed_addr global i32, align 4
@d = external dso_local local_unnamed_addr global i32, align 4

define i32 @foo() {
entry:
  %0 = load i32*, i32** @a, align 8
  %.pre = load i32, i32* @c, align 4
  %.pre2 = load i32, i32* @d, align 4
  br label %do.body

do.body:                                          ; preds = %do.body, %entry
  %1 = phi i32 [ %dec, %do.body ], [ %.pre2, %entry ]
  %2 = phi i32 [ %inc, %do.body ], [ %.pre, %entry ]
  %inc = add nsw i32 %2, 1
  store i32 %inc, i32* @c, align 4
  %3 = load i32, i32* %0, align 4
  store i32 %3, i32* @b, align 4
  %dec = add nsw i32 %1, -1
  store i32 %dec, i32* @d, align 4
  %tobool = icmp eq i32 %dec, 0
  br i1 %tobool, label %do.end, label %do.body

do.end:                                           ; preds = %do.body
  %.lcssa = phi i32 [ %2, %do.body ]
  %inc1 = add nsw i32 %.lcssa, 2
  store i32 %inc1, i32* @c, align 4
  ret i32 undef
}

Run this before and after rL346397:

$ opt a.ll -indvars -loop-reduce -S -o b.ll

After rL346397 we have an additional ADD instruction inside the loop, which is a performance regression.

I think that after rL346397 LSR is not able to rewrite all the uses of %2 because the phi node (%.lcssa) was not rewritten by the indvars pass.
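
For reference, here is roughly what the exit block would look like if indvars did rewrite the exit value of %2 (a hand-written sketch with made-up value names, not actual pass output). On exit, %2 equals %.pre + (%.pre2 - 1), which is computable from loop-invariant values, so the loop-closed use of %2 goes away and LSR can then rewrite the remaining in-loop uses:

do.end:                                           ; preds = %do.body
  ; no use of %2 remains outside the loop
  %exit.dec = add i32 %.pre2, -1
  %exit.val = add i32 %.pre, %exit.dec
  %inc1 = add nsw i32 %exit.val, 2
  store i32 %inc1, i32* @c, align 4
  ret i32 undef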

A simple fix would be to just restore the original check on the SCEV type in lib/Transforms/Scalar/IndVarSimplify.cpp:

  -  if (!isa<SCEVConstant>(ExitValue) && hasHardUserWithinLoop(L, Inst))
  +  if ((ExitValue->getSCEVType() >= scMulExpr) && hasHardUserWithinLoop(L, Inst))

Note that there is another bug open against the mentioned patch (https://bugs.llvm.org/show_bug.cgi?id=39673), but the fix there only addresses the case where the SCEV expression is a constant.

I’m not opening a new bug yet, but rather want to hear your comments.

Best regards,
Denis Bakhvalov.

In general, exit value rewriting isn’t really handled in a rigorous way; we have hasHardUserWithinLoop like you noted, and there have been some proposed changes to isHighCostExpansion recently. But we don’t try to weigh the overall cost of various expansions versus the original code anywhere; maybe something to look at improving in the future.

It’s probably not worth trying to save one add instruction if that blocks other loop optimizations. If the expansion is more expensive, it gets more complicated; we probably don’t want to be generating a bunch of code outside a loop if we’re not sure it actually saves code inside the loop. Not sure at first glance what cases your patch would trigger on.

-Eli

All: There will be a BoF talk at the EuroLLVM conference regarding Numerics (FMF and module flags which control fp behavior and optimization).

Even if you are not going to be in attendance, please reply to this thread as we are collecting open issues and ideas for future direction in all layers of LLVM for which optimizations are controlled by numerics flags.
If you like, please read over the numerics blog post for reference material:

http://blog.llvm.org/2019/03/llvm-numerics-blog.html

Regards,
Michael

Hey Michael,

Thank you for working on this!

I’d like to touch on a topic mentioned in the blog post. The constrained intrinsics work has hit a roadblock on how to proceed with the constrained implementation in the backends, i.e. D55506. Reviews/ideas in this area would be greatly appreciated (attn: target code owners).
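
For context, the constrained intrinsics replace ordinary FP operations with calls that carry explicit rounding-mode and exception-behavior operands, so the optimizer cannot reorder or fold them under the default assumptions. A minimal sketch (the function name is just for illustration):

define float @add_strict(float %a, float %b) {
  ; FP add with explicit rounding-mode and exception-behavior metadata
  %r = call float @llvm.experimental.constrained.fadd.f32(float %a, float %b,
                     metadata !"round.dynamic", metadata !"fpexcept.strict")
  ret float %r
}

declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)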

Thanks,
Cameron

Hi Michael,

Thanks for the blog post. I’d just like to point out a few things that I think are related to FP numerics.
LLVM could do some additional transformations with sqrt and division under fast math on X86, e.g. folding 1/sqrt(x) * 1/sqrt(x) into 1/x. These are long-latency instructions, so such folds could be beneficial when enabled under unsafe math.

Also are we considering doing such FP transforms on vector floating point types?

regards,
Venkat.

  1. Do you have a larger code example that shows the missed sqrt/div optimization?

We already optimize the example you provided in IR:
#include <math.h>
float sqrt_squared(float x) {
  return 1.0f/sqrtf(x) * 1.0f/sqrtf(x);
}

$ clang -O1 -ffast-math -emit-llvm sqrt.c -S -o -
define float @sqrt_squared(float) local_unnamed_addr #0 {
  %2 = fdiv fast float 1.000000e+00, %0
  ret float %2
}

  2. Yes, transforms like the above should work with vector types, e.g. something like the sketch below. If not, please file a bug.
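
To make the vector case concrete, here is a hand-written vector analogue of the earlier example (function name and types are just for illustration; the folded form shown in the trailing comments is what I would expect, not verified compiler output):

define <2 x double> @sqrt_squared_vec(<2 x double> %x) {
  %s = call fast <2 x double> @llvm.sqrt.v2f64(<2 x double> %x)
  %d = fdiv fast <2 x double> <double 1.000000e+00, double 1.000000e+00>, %s
  %m = fmul fast <2 x double> %d, %d
  ret <2 x double> %m
}

; expected to fold under fast-math to:
;   %r = fdiv fast <2 x double> <double 1.000000e+00, double 1.000000e+00>, %x
;   ret <2 x double> %r

declare <2 x double> @llvm.sqrt.v2f64(<2 x double>)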

[adding back the mailing list]

Thanks for the example!

define double @_Z12sqrt_squaredd(double) {
  %2 = tail call fast double @llvm.sqrt.f64(double %0)
  %3 = fdiv fast double 1.000000e+00, %2
  %4 = fmul fast double %3, %3
  ret double %4
}

So at first look, this has nothing to do with sqrt specifically. We are missing a basic factorization / re-association for fmul and fdiv:

// (L1 / L2) * (R1 / R2) → (L1 * R1) / (L2 * R2)

If we do that, then existing transforms should kick in to reduce the sqrt example. I’ll try to fix this in instcombine soon. If you have more complicated examples where we miss this, that would suggest that we need to make an enhancement to the “reassociate” pass too.
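
In IR terms, the missing canonicalization would look roughly like this (a hand-written before/after sketch with placeholder value names, not actual instcombine output):

; before: two divides feeding a multiply
  %q1 = fdiv fast double %L1, %L2
  %q2 = fdiv fast double %R1, %R2
  %r  = fmul fast double %q1, %q2

; after: reassociated into a single divide
  %n  = fmul fast double %L1, %R1
  %d  = fmul fast double %L2, %R2
  %r  = fdiv fast double %n, %d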

Hi Sanjay,

Thanks, yes, I was also thinking of looking at the reassociate pass to see if it can catch a few more cases.

regards,
Venkat.

On second look, this really is a sqrt-specific optimization:
https://reviews.llvm.org/rL357943

I think we still want the more general fmul+fdiv canonicalization, but that transform has some risk of exposing different missing optimizations, so I’ll wait a bit before trying that.