[RFC] Floating-point accuracy control

I had a lot of helpful discussions about this topic at the LLVM Dev Meeting. Thanks to everyone who gave their input.

Tue Ly (@lntue) suggested that it would be helpful to have additional properties like rounding mode represented. That would allow us to select different implementations of functions that provide correctly rounded results. Other constraints like limited domain might also be helpful in the future. This leads me to want a more open-ended and extensible way of specifying constraints, which could possibly be combined with the existing constrained fp handling.

Johannes Doerfert (@jdoerfert) said he would like to get rid of the existing math intrinsics (llvm.sin, llvm.cos, etc.) since they almost entirely duplicate what is done with the LibFunc handling, with the minor difference being errno handling. He suggested I could get the behavior I want by attaching attributes to the regular function calls and the math intrinsics could be eliminated altogether.

I’m not entirely comfortable with that suggestion for several reasons. (1) A lot of existing optimizations would need to be updated to respect the attribute, and future optimizations could break things if they didn’t know to look for the attributes. (2) I’m not comfortable with the idea of replacing one named function call with a completely different function call. It’s technically doable, and probably happens already in some cases, but it feels to me like something that the IR isn’t specifically allowing. (3) I intend for my function accuracy handling to target various calls that are equivalent to the standard math library call but originate as something different, such as the SYCL or CUDA builtin transcendental functions.

I have a new vision for how to bring this all together, and I think it’s pretty good. I’ll put together a new RFC providing more details and even a patch to go along with it, but for now I’d like to sketch out the basic idea here and ask for feedback.

First, I’d like to introduce a new set of math intrinsics, which I hope will eventually become the only math intrinsics. I want to give them a new name so they don’t inherit any unwanted behavior and the existing intrinsics can be phased out gradually. There will be two key characteristics of these intrinsics: (1) They will be tightly coupled with a new set of call site attributes that are valid only for these intrinsics and have defined default values that apply if the attribute is not present. (2) They will be defined explicitly as being intended to be replaced with alternate implementations that meet the requirements described by the associated attributes.

So for my accuracy use case, I’d imagine a call like this:

%0 = call double @llvm.fpbuiltin.cos(double %x) #0
...
attributes #0 = { "fp-max-error"="4.0" }

I’d add a wrapper class, FPMathBuiltin, that allows these calls to be handled like a first-class instruction, and for any attribute we add there would be an accessor function like FPMathBuiltin::getRequiredAccuracy().

For the first phase of implementation, I imagine using this only for the accuracy case. Later, we could move over the constrained intrinsics by adding support for “fp-rounding-mode” and “fp-exception-behavior” attributes. Any other constraints we need later could be added without requiring a new set of intrinsics.
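As a sketch of what that second phase might look like, a constrained operation could carry the new attributes at the call site. The attribute spellings and values below are assumptions on my part, modeled on the metadata arguments of the current constrained intrinsics:

```llvm
; Hypothetical: what is today @llvm.experimental.constrained.cos with
; metadata arguments, expressed instead as the proposed intrinsic plus
; call-site attributes (names and values are illustrative only).
%1 = call double @llvm.fpbuiltin.cos(double %x) #1
...
attributes #1 = { "fp-rounding-mode"="round.dynamic" "fp-exception-behavior"="strict" }
```

As with "fp-max-error", the absence of either attribute would imply the defined default (round-to-nearest, exceptions ignored), so unconstrained code pays no cost.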

Once everyone is comfortable with the idea that you have to check the attributes before doing anything with these intrinsics, the new intrinsics could replace the existing math intrinsics entirely.

I’d also like to tie this back to the correctly rounded math library implementations Tue Ly has created. As I understand it, some of these rely on the availability of non-default architecture features like FMA. The mechanism I’m proposing here could be used to select alternate math library implementations. I’m picturing something analogous to clang’s -fveclib. This could potentially have multiple implementations like __cos_cr_avx2, __cos_cr_sse, etc., not to mention vector versions like __cos_cr_f64x4_avx2.
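To make the selection step concrete, here is a hypothetical before/after for such a pass. The library function names follow the __cos_cr_* naming sketched above; the pass itself and the assumption that "fp-max-error"="0.5" is the spelling for correctly rounded results are illustrative only:

```llvm
; Before: generic intrinsic requesting a correctly rounded result.
%0 = call double @llvm.fpbuiltin.cos(double %x) #0
attributes #0 = { "fp-max-error"="0.5" }

; After a hypothetical -fveclib-style selection pass on an AVX2 target:
%0 = call double @__cos_cr_avx2(double %x)
```

The pass would be free to pick __cos_cr_sse on targets without AVX2, or to fall back to the default libm call when no available implementation meets the requested accuracy.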
