*Integer* a and b? What's the expected use case for this?

For 32-bit integers feeding into an IEEE-754 double calc, this might work as long as a, b are known >= 0; for doubles/floats, it's not true, even excluding NaNs. Take this counterexample (for float32):

a = 1.0f - 0.5f*FLT_EPSILON // IEEE-754 float32: 0x3f7fffff

b = 1.0f - 1.0f*FLT_EPSILON // IEEE-754 float32: 0x3f7ffffe

sqrt(a) == sqrt(b) == a (in IEEE arithmetic, unless I messed up somewhere)

therefore a > b but sqrt(a) == sqrt(b)

[the equivalent example for doubles should also work]

It's not true for 64-bit integers either, when the computation is done using IEEE doubles, because there are pairs of integers a, b such that a > b but (double)a == (double)b.

As soon as rounding is involved, you _have_ to be prepared for strict inequalities turning into non-strict ones (at best).

The equivalent non-strict inequality (a >= b implying sqrt(a) >= sqrt(b) provided all values are ordered) is somewhat less dicey and might actually work provided the values are known non-negative, but I'd want to see a (preferably mechanical) proof first.

sin, cos, tan are all non-monotonic. After all, they're periodic! Before you can rewrite anything, even in ideal circumstances, you have to (at the very least) prove that all the operands in question are from an interval such that the restriction of sin (or cos, or tan) to said interval is monotonic. That's a _way_ stronger requirement than just "no NaNs".

And that's assuming the library implementations of sin/cos/tan are such that they are monotonic when restricted to an appropriate interval. I don't believe that's guaranteed anywhere. (But I'd be happy to be corrected on that point.)

-Fabian