Trapping math for RISC-V

AArch64 supports trapping floating-point exceptions on some hardware, but where that's unsupported there's no software emulation of traps. I'm not aware of any architecture where someone has attempted to implement software emulation of floating-point traps when the hardware doesn't support them.

It’s more complex to implement than described above, because traps should not trigger based on accrued exception bits, but only on flags raised by the operation being executed.

Thus, to emulate this correctly, you need to do the following (a rough sketch of the resulting instruction sequence follows the list):

  1. Write zero to FFLAGS, saving the previous value into a register.
  2. Execute the FP instruction.
  3. Read the enabled-trap bits from wherever they're stored.
  4. AND the new FFLAGS value with those enabled-trap bits.
  5. OR the flags saved in step 1 back into the FFLAGS register.
  6. Branch over the call to the trap handler if the result of step 4 was zero.
  7. Call the trap handler, passing it the result of step 4 so it knows which flag to set in the siginfo_t passed to the signal handler.
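Concretely, for a single fadd.d the instrumented sequence might look something like the sketch below. This is only an illustration of the steps above, assuming free temporaries and RV64: the fp_trap_enable slot and the __fp_trap_handler symbol are hypothetical stand-ins for whatever ABI is eventually defined for the enable mask and the handler, and saving/restoring ra around the call is omitted.

```asm
    # Step 1: zero FFLAGS, saving the accrued flags in t0
    csrrw   t0, fflags, zero
    # Step 2: the FP instruction being instrumented
    fadd.d  fa0, fa0, fa1
    # Steps 3-4: read the flags this operation raised and mask them
    # with the enabled-trap bits (loaded from a hypothetical global slot)
    csrr    t1, fflags
    ld      t2, fp_trap_enable      # hypothetical: wherever the enable mask lives
    and     t2, t1, t2
    # Step 5: OR the accrued flags saved in step 1 back into FFLAGS
    csrs    fflags, t0
    # Step 6: branch over the handler call if no enabled flag was raised
    beqz    t2, 1f
    # Step 7: pass the offending flag bits to the trap handler
    mv      a0, t2
    call    __fp_trap_handler       # hypothetical; calling convention TBD
1:
```

Even in the no-trap case, every FP instruction balloons into roughly a dozen, which is where the overhead complaint below comes from.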

You’ll also need to define an ABI for where the trap-enable bits are stored – possibly that must be handled by the kernel, since the value should be cleared and restored around signal handlers? Or maybe it can be handled by libc? And you’ll also need to define the ABI for how to invoke the trap handler…

This really seems like an awful lot of complexity…and a lot of overhead. I would, instead, encourage everyone to just avoid the need to do this.
