[FP] Constant folding of floating-point operations

But all results are approximate, so it isn’t particularly clear what this means. The best you can hope for is a correctly rounded result, which is still an approximation and is provided by very few real libm implementations for any of the difficult functions.
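
As a small illustration (my own example, not anything from the proposal): even an operation that IEEE 754 requires to be correctly rounded, like division, still folds to nothing more than the nearest representable value.

```c
#include <stdio.h>

int main(void) {
    /* 1/3 is not representable in binary64; the correctly rounded result
     * is merely the closest double. */
    double third = 1.0 / 3.0;      /* a compiler will typically fold this */
    printf("%a\n", third);         /* 0x1.5555555555555p-2 */
    printf("%.20f\n", third);      /* close to, but not exactly, 1/3 */
    return 0;
}
```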

There certainly is no requirement. LLVM, C, and C++ barely have defined floating-point requirements, much less required values for specific functions. OpenCL at least gives ULP tolerances (which also do not need to be consistent).

I don’t see this as an issue. There’s a broader question of what constitutes the “result” of the operation. In the presence of any kind of folding or combining (e.g. any use of the contract or reassoc flags), the apparent result can already change depending on the use context, entirely apart from constant folding.
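
As a rough C sketch of that context dependence (function names are mine, purely illustrative): with contraction enabled (e.g. -ffp-contract=fast, or the contract flag on the IR), whether a*b goes through an intermediate rounding can depend on the shape of the surrounding code.

```c
#include <stdio.h>

/* With contraction enabled, the compiler may fuse this into fma(a, b, c),
 * skipping the intermediate rounding of the product. */
double fuseable(double a, double b, double c) {
    return a * b + c;
}

/* Here the product has a separate use, so it is likely to be rounded on
 * its own before the add. */
double product_observed(double a, double b, double c, double *prod) {
    *prod = a * b;
    return *prod + c;
}

int main(void) {
    double p;
    /* The two calls may legitimately return different values for the same
     * inputs, even though both nominally compute a*b + c. */
    printf("%a\n", fuseable(1.0 + 0x1p-52, 1.0 - 0x1p-52, -1.0));
    printf("%a\n", product_observed(1.0 + 0x1p-52, 1.0 - 0x1p-52, -1.0, &p));
    return 0;
}
```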

Kind of the same thing, and I do think this is a problem. It’s been a long-standing aspirational goal to have consistent constant folding (which then doesn’t necessarily match the target library behavior), just internally consistent within the compiler. Ideally we would just use MPFR or equivalent for this.
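
A minimal sketch of what MPFR-based folding could look like (my own illustration, not an existing LLVM API): evaluate at the destination precision and let MPFR’s correct rounding decide the constant, so the result no longer depends on the host libm.

```c
#include <stdio.h>
#include <mpfr.h>

/* Hypothetical folding helper for a double sin() call: MPFR computes a
 * correctly rounded 53-bit result, independent of whatever libm the
 * compiler happens to be linked against. */
static double fold_sin_f64(double x) {
    mpfr_t t;
    mpfr_init2(t, 53);                   /* precision of IEEE binary64 */
    mpfr_set_d(t, x, MPFR_RNDN);         /* exact: x is already a double */
    mpfr_sin(t, t, MPFR_RNDN);           /* correctly rounded to 53 bits */
    double r = mpfr_get_d(t, MPFR_RNDN); /* exact conversion back */
    mpfr_clear(t);
    return r;
}

int main(void) {
    printf("%.17g\n", fold_sin_f64(1.0));
    return 0;
}
```

(Subnormal and overflow edge cases would need more care than this, but it shows the shape of the thing.)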

This is definitely fine. The point is to produce whatever is fastest, and constant folding as fma is perfectly fast (and also gives the better result). “If the target supports them” is meaningless. The compiler could always choose to implement fma in software to implement fmuladd. This is defined operationally and should not differ based on target codegen preferences. If the target had some pipelining issue, it could resolve it by selectively emitting either a separate fmul+fadd or an fma, and that should also be fine. If you want a specific behavior, you can choose not to use fmuladd.
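
To make the fold concrete (my own numbers, using the standard C fma from math.h): both values below are acceptable results for fmuladd, and the fused one is the more accurate of the two, so folding to it is fine regardless of what the target would have selected.

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 1.0 + DBL_EPSILON;   /* 1 + 2^-52 */
    double b = 1.0 - DBL_EPSILON;   /* 1 - 2^-52 */
    double c = -1.0;

    /* volatile keeps the product from being contracted into an fma, so
     * this path really does round a*b (to 1.0) before the add. */
    volatile double prod = a * b;
    printf("fmul+fadd: %a\n", prod + c);     /* 0x0p+0 */
    printf("fma:       %a\n", fma(a, b, c)); /* -0x1p-104 */
    return 0;
}
```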

Practically speaking this proposal is “do no constant folding”, which will not go over well. I would love it if we had perfect host-independent constant folding, but that’s a lot of work nobody has seriously thought about undertaking.