Demikhovsky, Elena wrote:
> Even then, I'd personally want to see further evidence of why the
> correct solution is to model the floating point IV in SCEV rather than
> find a more powerful way of converting the IV to an integer that models
> the non-integer values taken on by the IV. As an example, if the use
> case is the following code with appropriate flags to relax IEEE
> semantics so this looks like normal algebra etc:
> for (float f = 0.01f; f < 1.0f; f += 0.01f) ← **A**
> I'd rather see us cleverly turn it into:
> float f = 0.01f;
> for (int i = 1; i < 100; i += 1, f += 0.01f) ← **B**
I can later try to enhance IndVarSimplify::handleFloatingPointIV() to
convert **A** to **B**.
But **B** is exactly the case I’m starting from. The main IV “i” is an
integer; the variable “f” is also an IV in this loop.
And this loop is not vectorized because “f” is floating point.
I don’t think that case **B** is uncommon.
If B is the case we actually care about, I'd say changing SCEV to work with floating points is overkill. How would you expect an SCEVFAddExpr to help vectorize B, other than telling you what the initial value and the increment are (and these can be found with a simple value analysis)?
If we're interested in handling complex variants of A directly: computing trip counts, proving away predicates etc. without translating the loops to use integer IVs (perhaps because we can't legally do so), then I can see FP-SCEV as a reasonable implementation strategy, but it looks like the general consensus is that such cases are rare and generally not worth optimizing?