I produce a mathematical modelling library on several platforms, including iOS, macOS and Android, all of which use Clang. Where the hardware can generate floating-point traps, I prefer to run testing with traps for Invalid Operation, Divide-by-Zero and Overflow turned on, since that finds me problem sites more quickly than working backwards from “this test case failed.”
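For context, on x86-64 those three traps are enabled by clearing the corresponding exception-mask bits in the MXCSR control register. A minimal sketch of how that can be done portably across Clang targets (this is not my library's actual code; macOS lacks glibc's feenableexcept, so it goes through the SSE control-register intrinsics directly):

```c
#include <xmmintrin.h>

/* Unmask Invalid Operation, Divide-by-Zero and Overflow in MXCSR so that
 * SSE arithmetic raises SIGFPE at the faulting instruction instead of
 * silently producing NaN or Inf.  MXCSR mask-bit positions: Invalid
 * Operation = bit 7, Divide-by-Zero = bit 9, Overflow = bit 10. */
static void enable_fp_traps(void) {
    unsigned int csr = _mm_getcsr();
    csr &= ~((1u << 7) | (1u << 9) | (1u << 10));
    _mm_setcsr(csr);
}
```

With this in place, a trap fires at the offending instruction, which is what makes problem sites easy to locate in a debugger.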
However, I had a problem with Apple Clang 8.x (which I believe was based on LLVM 3.9) targeting x86-64: the optimiser assumed that floating-point traps were turned off. This showed up, for example, as floating-point divides being hoisted above the tests of the divisor that were meant to safeguard them.
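A minimal illustration of the kind of pattern involved (a hypothetical function, not my library's code). With traps masked, speculating the divide above its guard is harmless, since a zero divisor merely produces an Inf that is never stored; with divide-by-zero traps unmasked, the speculated divide itself faults:

```c
/* The divide is guarded by the test on den, but a trap-unaware optimiser
 * may rewrite xs[i] / den as xs[i] * (1.0 / den) and hoist the
 * loop-invariant reciprocal out of the loop, above the guard, so it
 * executes even when den == 0.0. */
void scale_all(double *xs, int n, double den) {
    for (int i = 0; i < n; ++i) {
        if (den != 0.0)            /* guard meant to protect the divide */
            xs[i] = xs[i] / den;
    }
}
```

Under the C standard this transformation is legal unless the implementation honours `#pragma STDC FENV_ACCESS ON`, which is why suppressing it needs compiler-specific options.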
After a long support case, Apple gave me some command-line options, passed through to LLVM with -mllvm, that suppressed the problem:
I appreciate that this costs some performance, and I can accept that. These options worked fine for Apple Clang 9.x, whose various versions seem to have been based on LLVM 4.x and 5.x.
Now I’ve come to Apple Clang 10.0, which seems to be based on LLVM 6.0.1, and I’m getting lots of spurious floating-point traps again in optimised x86-64 code. It seems possible that I need some further LLVM options: does this seem plausible?
I’m not familiar with the LLVM codebase, and while I can find the source files that list the options I can use with -mllvm, I’d be guessing at which options are worth trying. Can anyone make suggestions?
Thanks very much,