Question About <math.h> Speed/Accuracy Goals with llvm-libc

Hi all,

I was just wondering about what balance between speed/accuracy the
project is aiming to hit in functions such as `hypot(double, double)'.

In particular, I was looking at a recent algorithm, "An Improved
Algorithm for hypot(a,b)" (https://arxiv.org/abs/1904.09481), which
offers an accuracy/rounding improvement over glibc's
`__ieee754_hypot(double, double)' at the cost of around a 5x
performance penalty...

If anyone could shed some insight into this question, as vague or
trivial as it might be, I would be most grateful.

Cheers,
Sameed

My personal opinion is that a general-purpose libc should aim for accuracy over speed on functions such as this, because accuracy is far more generally useful.

It’s reasonably trivial to pick a point on the speed/accuracy balance that’s accurate enough for nearly all programs, just by choosing the most accurate implementation that’s not abysmally slow. This includes working for what is by far the most common case, which is “I have no idea what accuracy I need here so long as the overall program gets the right answer, and I don’t want to try to figure it out.” That’s also a kindness for users; errors from loss of floating-point precision in transcendental functions are fiercely painful to debug.

On the speed side, most uses of math.h functions aren’t in hot paths where getting the highest possible speed matters, and most of the uses that are in those paths probably shouldn’t be. Even aside from things like vectorization, most algorithms can trade off some amount of accuracy (whether in precision or in range of inputs), but what they can trade off is very algorithm-dependent. In those performance-critical spots, what’s needed is inherently going to be a specialized function, not a general-purpose one from a standard library. So, again, when a general-purpose function is actually appropriate, pushing the balance almost all the way to the “accuracy” side is still an acceptable choice.
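
To make the specialized-versus-general distinction concrete, here is a minimal C sketch (hypothetical; `hypot_naive' is not a real libc function) of the kind of fast path a performance-critical caller might write for itself, and what it gives up relative to the general-purpose libc hypot:

    /* Hypothetical sketch: a caller-specialized hypot versus the
     * general-purpose libc hypot.  The naive form is fine when the
     * caller knows its inputs are moderate, but it overflows where
     * the libc function must not. */
    #include <math.h>
    #include <stdio.h>

    /* Fast, but with no overflow, underflow, or rounding guarantees. */
    static double hypot_naive(double a, double b) {
      return sqrt(a * a + b * b);
    }

    int main(void) {
      double a = 3e200, b = 4e200;              /* a*a overflows to +inf */
      printf("naive: %g\n", hypot_naive(a, b)); /* prints inf */
      printf("libc : %g\n", hypot(a, b));       /* prints 5e+200 */
      return 0;
    }

Much of the cost of a careful general-purpose implementation comes from exactly this kind of range and rounding handling, which a specialized caller is free to drop.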

The tricky bit comes with that “almost” in “almost all the way to the ‘accuracy’ side”; at some point, the cost/benefit tradeoff becomes very steep. I don’t know whether a 5x cost for hypot(double, double) is worth it, given that glibc’s implementation is already very accurate. However, both “hypot” and “double” are used specifically in cases where accuracy matters, so IMO the answer is probably yes.

- Brooks

> Hi all,
>
> I was just wondering about what balance between speed/accuracy the
> project is aiming to hit in functions such as `hypot(double, double)'.

I am not sure we can have a generalized answer here. As a personal first opinion, I lean toward the side of accuracy. After all, I would expect functions like these to give me the correct (as in, as accurate as possible) results. Brooks put it perfectly: “I have no idea what accuracy I need here so long as the overall program gets the right answer.” I would just trust that the libc is giving me correct results and would debug “my” code when something goes wrong. So, I want the libc to not betray that trust.

Now, there could be applications where a high-precision result is not important, but the speed with which imperfect results are computed matters. Should a libc cater to such use cases? Personally, I don’t think so. But if it turns out that 90% of the use cases don’t need the accuracy we are buying at the cost of speed, and that cost is a real burden, then we might/should reprioritize.

> In particular, I was looking at a recent algorithm published at
> https://arxiv.org/abs/1904.09481 which offers an accuracy/rounding
> improvement over glibc's `__ieee754_hypot(double, double)' at the cost
> of around a 5x performance penalty...
>
> If anyone could shed some insight into this question, as vague or
> trivial as it might be, I would be most grateful.

In general, data should help us answer this. Take the current memcpy implementation, for example: it is optimized for sizes of at most 128 bytes because it was found that, in practice, 96% of the calls to memcpy are for sizes <= 128 bytes: https://github.com/llvm/llvm-project/blob/master/libc/benchmarks/README.md
For math functions, we should start with implementations that are as accurate as possible, and adjust according to the data.
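
As one example of the kind of data that could feed that adjustment, here is a rough, hypothetical C harness (not llvm-libc's benchmark infrastructure; `hypot_naive' is a made-up stand-in for a faster, less careful implementation) that times the system hypot against a naive sqrt(a*a + b*b) and counts how often their results differ on random inputs:

    /* Hypothetical measurement sketch, not llvm-libc's benchmark harness:
     * compare the system hypot() against a naive formulation for speed
     * and for how often the results disagree. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double hypot_naive(double a, double b) { return sqrt(a * a + b * b); }

    int main(void) {
      enum { N = 1000000 };
      double *xs = malloc(N * sizeof *xs);
      double *ys = malloc(N * sizeof *ys);
      if (!xs || !ys) return 1;
      srand(1);
      for (int i = 0; i < N; ++i) {
        xs[i] = (double)rand() / RAND_MAX;
        ys[i] = (double)rand() / RAND_MAX;
      }

      volatile double sink = 0.0; /* keep the loops from being optimized away */
      clock_t t0 = clock();
      for (int i = 0; i < N; ++i) sink += hypot(xs[i], ys[i]);
      clock_t t1 = clock();
      for (int i = 0; i < N; ++i) sink += hypot_naive(xs[i], ys[i]);
      clock_t t2 = clock();

      long diffs = 0;
      for (int i = 0; i < N; ++i)
        diffs += hypot(xs[i], ys[i]) != hypot_naive(xs[i], ys[i]);

      printf("libc hypot : %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
      printf("naive hypot: %.3fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
      printf("differing results: %ld of %d\n", diffs, N);
      free(xs);
      free(ys);
      return 0;
    }

A real evaluation would compare candidate implementations against a correctly rounded reference (e.g. MPFR) in ULPs and on a realistic distribution of inputs, but even this crude form shows how measurements, rather than intuition, can decide whether something like a 5x cost is tolerable.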

Thanks,
Siva Chandra