Distribution doesn’t seem to be used by many transforms at present. My vague recollection is that the fast-math flags didn’t do a great job of characterizing when it would be allowed, and using it aggressively broke a lot of code in practice (code that was numerically unstable already, but depended on getting the same unstable results), so people have been gun-shy about using it. Owen might remember more of the gory details.

Arguably, it is implicitly used when FMA formation is combined with fast-math, e.g.:

```c
float foo(float x, float y) {
  return x * (y + 1);
}
```

Compiled with -mfma -ffast-math, this generates fma(x, y, x). Even though this transform superficially appears to use distributivity, that’s somewhat debatable because the fma computes the whole result without any intermediate rounding, so it’s pretty wishy-washy to say that it’s been used here.

It most definitely has been used here, because of inf/nan behavior.

inf*(0 + 1) == inf

inf*0 + inf == nan

(I actually fixed this bug in the past because it occurred in practice.)

Thanks Nicolai and Steve for the initial replies.

So if I understand correctly, there are two places where distributivity is used:

- simplification of infinity/NaN expressions

- combination with FMA introduction

@Steve: You mentioned "fast-math flags characterizing when it would be allowed" — is there a point of reference that specifies exactly which transformations each fast-math flag allows, beyond the high-level explanation in the LLVM documentation?

Thanks again,

Heiko


Thanks Nicolai and Steve for the initial replies.

So if I understand correctly, there are two places where distributivity is used:

- simplification of infinity/NaN expressions

- combination with FMA introduction

Well no, my comment also applied to the FMA introduction.

Stephen was a bit hesitant about what to call the x * (y + 1) --> x * y + x FMA-introducing transform on the grounds that it superficially only seems to improve the precision at which the expression is evaluated. My point was that this very same transform can introduce very significant, qualitative differences in the result when inf is involved.

Cheers,

Nicolai

Stephen was a bit hesitant about what to call the x * (y + 1) → x * y + x FMA-introducing transform on the grounds that it superficially only seems to improve the precision at which the expression is evaluated.

It’s a little bit more subtle than that; because FMA is computed without internal rounding, under an as-if model, you can’t differentiate between fma(x, y, x) and a hypothetical correctly-rounded x*(y + 1), so it doesn’t even make sense to talk about “distributivity” in this context …

My point was that this very same transform can introduce very significant, qualitative differences in the result when inf is involved.

… except with regard to inf/nan edge cases, as you correctly pointed out. =)

– Steve

Thank you once again for the further clarifications.

I still have one more question:

What is the canonical source for a definitive answer on which optimizations are applied, and when, once fast-math optimizations are enabled in LLVM?

A pointer to a source file would also be fine. I tried searching http://releases.llvm.org/7.0.0/docs/Passes.html and did not find any information there, so I am feeling a bit lost.

Thank you,

Heiko