Currently, we have
llvm.minnum.fXX
llvm.maxnum.fXX
llvm.minimum.fXX
llvm.maximum.fXX
They have different behavior for NUM vs sNaN on different platforms/cases:
1.1) It may fall back to libc's fmin/fmax.
NUM vs sNaN -> qNaN + Invalid Exception
1.2) ARM/ARM64
fminnm/fmaxnm instructions may be used.
The behavior is the same as libc.
The same applies to MIPSr6/PowerPC/LoongArch.
1.3) X86_64
NUM vs sNaN -> NUM is returned.
1.4) RISCV/Hexagon
fmin.s/d are used, so NUM vs sNaN -> NUM.
libc introduced fminimum_num/fmaximum_num, which follow IEEE 754-2019's minimumNumber/maximumNumber.
So my proposal is:
Introduce minimumnum/maximumnum intrinsics, which will benefit ports like RISC-V; other ports like ARM64 can implement them as fcanonical+fcanonical+minnum/maxnum (canonicalize both operands, then take minnum/maxnum).
Sync minnum/maxnum with libc.
Since minnum/maxnum may fall back to fmin(3)/fmax(3), we shouldn't introduce surprises for users.
minimum/maximum may need more testing to sync with libc's fminimum(3)/fmaximum(3); there may be problems on some architectures.
For the messiness related to SNaN, see Questions about llvm.canonicalize - #14 by andykaylor … in particular, in non-strict mode, we don’t guarantee that floating-point arithmetic ops quiet SNaNs. So anything that tries to distinguish between SNaN and QNaN inputs is going to produce unpredictable results. I think this impacts the proposed “minnum”.
The 2019 definition is much more useful than the previous snan-behavior-is-backwards behavior.
I think we need to fix minnum/maxnum to actually follow the brain-dead IEEE 2008 behavior. Unlike every other case, where you can ignore signaling vs. quiet, this case has always been problematic because the signaling behavior is inverted from the quiet behavior. The unpredictability would be driven by upstream value producers not guaranteeing earlier quieting, not by the fmin/fmax itself. What we do now on targets following the 2008 definition is insert canonicalizes during lowering to effectively treat signaling NaNs as quiet, which doesn't match the name.
If we don’t fix this, I think minnum/maxnum should be removed, or at least renamed. Most uses would be better served by fminimum_num/fmaximum_num anyway. The one occasional plus of minnum is the relaxed signed-zero requirement.
Hi @wzssyqa. Firstly, thanks for your work on this. I’m trying to figure out what needs to be done next on RISC-V (specifically in terms of changing our lowering for llvm.minnum/llvm.maxnum) but it’s a little hard to keep track of where things are at. Would you mind summarizing the current status, and if you’ve moved over any targets that (like RISC-V) were previously using 2019 semantics rather than 2008 for minnum/maxnum. If you happen to plan to look at this for RISC-V that would be useful to know as well!
Are those exposed as __builtin_* in the clang frontend? Do you mind comparing / testing against LLVM libc's implementations at: implementation link testing link
If __builtin_* are available in the clang frontend, it will be fairly easy for us to test them in our test suite.