SIGFPE received when a program is compiled with Clang but not with GCC

Hello. I'm using Clang 3.7~svn251177-1~exp1 from the apt repo on
Kubuntu Trusty (64-bit). I'm trying to backport the Asymptote package
from Debian sid.

I find that the build fails with:

../asy -dir ../base -config "" -render=0 -f pdf -noprc filegraph.asy
../base/plain_Label.asy: 294.10: runtime: Floating point exception (core dumped)

When I investigated this I found that there was no problem when
compiling with GCC.

Please see the attached script, which should demonstrate the error.
Running the script with -g compiles with the "default" compiler (GCC)
instead and produces no error.

I used KDbg to debug the situation, i.e. to run the asy executable
with the arguments that produced the error during the build, and found
that line 148 of pair.h reads:

if(scale != 0.0) scale=1.0/scale;

Despite the if() check, during one particular invocation of
pair unit(const pair&) the program somehow attempts the division
anyway and gets the error.

Note that for some reason I keep getting SIGPWR, SIGXCPU, etc. while
debugging. I don't know why; perhaps it is because asy implements a
virtual machine that does not interact well with a debugger. In any
case, repeatedly running the executable with the given arguments
eventually reproduces the error.

asymptote-compilation-bug.sh (722 Bytes)

LLVM believes that floating point division will never trap, and speculates it to make code like this:
float tmp = 1.0 / scale;
scale = scale == 0.0 ? scale : tmp;

Normally, FP division produces NaN. The only way that I’m aware of to make FP div trap is to use fenv.h, which isn’t supported:
https://llvm.org/bugs/show_bug.cgi?id=8100

s/NaN/infinity/, but that’s the gist of it.

– Steve

> LLVM believes that floating point division will never trap, and
> speculates it to make code like this:
> float tmp = 1.0 / scale;
> scale = scale == 0.0 ? scale : tmp;

I don't get it. In what way is the above code more efficient than:

if (scale != 0) scale = 1 / scale;

... that the compiler should replace this with that?

> Normally, FP division produces NaN. The only way that I'm aware of to
> make FP div trap is to use fenv.h, which isn't supported:
> https://llvm.org/bugs/show_bug.cgi?id=8100

But the code above is perfectly legal, logical and safe C/C++ code,
and that Clang doesn't support it (in the sense of emitting code that
causes a SIGFPE when one isn't warranted) *is* a bug, no?

And it doesn't seem to be per se a dup of the bug you mention above
(in which case I should report it separately) or is it?

> LLVM believes that floating point division will never trap, and
> speculates it to make code like this:
> float tmp = 1.0 / scale;
> scale = scale == 0.0 ? scale : tmp;

> I don't get it. In what way is the above code more efficient than:
>
> if (scale != 0) scale = 1 / scale;
>
> ... that the compiler should replace this with that?

> Normally, FP division produces NaN. The only way that I'm aware of to
> make FP div trap is to use fenv.h, which isn't supported:
> https://llvm.org/bugs/show_bug.cgi?id=8100

> But the code above is perfectly legal, logical and safe C/C++ code,
> and that Clang doesn't support it (in the sense of emitting code that
> causes a SIGFPE when one isn't warranted) *is* a bug, no?

We support the code you listed, but we don't support configuring your
processor to make FP divide by zero trap.

I'm not making a value judgement on what LLVM does here, I'm just relaying
the facts.

> And it doesn't seem to be per se a dup of the bug you mention above
> (in which case I should report it separately) or is it?

This bug has been reported enough times that it's probably worth
having one just for speculation of FP divide. There might already be
one in the list of dupes that we could split off.

I filed https://llvm.org/bugs/show_bug.cgi?id=25572

http://reviews.llvm.org/D14079 is a pending change which is intended to allow this to be fixed.