Removing ReadOnly from math intrinsics

In include/llvm/IR/Intrinsics.td, sqrt and several other math intrinsics are marked “ReadOnly”, even though they do not read memory.

According to the comments, this was done in an attempt to model changes to the FP rounding mode. This is too conservative and unnecessarily blocks transformations such as commoning and vectorization.

I have heard from others that FP environment changes are not well modeled in LLVM anyway, so perhaps it is appropriate to just change these from ReadOnly to ReadNone. Any opinions on this? If there are no objections I’ll prepare a patch.
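In IR terms the change is small; the sketch below is hand-written (the real edit would be to the intrinsic properties in Intrinsics.td, which map onto these declaration attributes), so the exact spelling in the tree may differ:

  ; Hand-written sketch; the actual change is to the intrinsic properties in
  ; include/llvm/IR/Intrinsics.td, which map onto these IR attributes.
  ;
  ; Today the declaration is effectively:
  ;   declare double @llvm.sqrt.f64(double) readonly
  ;
  ; The proposal is for it to become:
  declare double @llvm.sqrt.f64(double) readnone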

The alternative would be to develop a mechanism to model FP environment changes precisely; going further, it might be possible to come up with a unified model of library-call side effects, including errno and even I/O.

Thanks,

While our current modeling isn’t quite right (e.g. we don’t model writes to errno and related state), I’m very reluctant to see us move in the direction you propose. I’m leery of having intrinsics which are modeled incorrectly while relying on the optimizers not to exploit that fact and yield incorrect code. That seems like a recipe for disaster long term.

I know there had been some work discussed on-list around modeling errno explicitly. I’m not sure what happened with that or what the current status is.

I would be mildly supportive of an effort to add optional explicit alias sets to function declarations. I have an (out-of-tree) use case which might benefit from such a mechanism as well.

Philip

errno is a totally separate issue. I started to conflate the two when talking
with Raul; sorry about that.

The intrinsics should (IMO) be modeled as *not* fiddling with errno at all.
That is already the case today: we don't transform library calls into
intrinsic calls unless we are compiling under '-fno-math-errno', which
explicitly says this is allowed.
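For concreteness, the distinction looks roughly like this (a hand-written sketch; exact signatures and attributes depend on the target and front end):

  ; Hand-written sketch, not taken from the tree.
  declare double @sqrt(double)           ; libm call: may set errno
  declare double @llvm.sqrt.f64(double)  ; intrinsic: never touches errno

Only under -fno-math-errno is the compiler allowed to rewrite the first form into the second.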

The interesting question is whether the floating point intrinsics "read"
the hardware rounding mode. I would very much like to say "no" because LLVM
has essentially no support for modeling hardware rounding modes in any
other context. For example, we don't model it for multiply and divide, so
modeling it for sqrt seems pointless and impedes a huge number of
optimizations.
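To make that concrete, here is a hand-written sketch (not compiler output)
of the kind of redundancy the current marking blocks:

  ; With @llvm.sqrt.f64 marked readonly, the store below is conservatively
  ; assumed to clobber whatever the call reads, so the second call cannot be
  ; commoned with the first. If the intrinsic were readnone, the second call
  ; would be trivially redundant.
  declare double @llvm.sqrt.f64(double) readonly

  define double @example(double %x, double* %p) {
  entry:
    %a = call double @llvm.sqrt.f64(double %x)
    store double %a, double* %p
    %b = call double @llvm.sqrt.f64(double %x)
    %sum = fadd double %a, %b
    ret double %sum
  }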

Right. As Chandler says, these intrinsics are only used under -fno-math-errno and are already properly modeled as not modifying errno.

The change being proposed only affects dynamic changes of the hardware FP rounding mode, which LLVM does not currently support (#pragma STDC FENV_ACCESS ON). We currently appear to be in the worst of both worlds: we have no support for this environment, yet we are always constrained by it.

I would be perfectly fine with Raul & Chandler's proposal, provided that clear documentation is added. The strong distinction between standard library calls and intrinsics is an important point for front-end authors. The deliberate ignorance of the floating point environment is entirely defensible, but it needs to be documented clearly. We should also document which rounding mode (and related bits of FP environment state) our optimizations assume.

Do we currently model flag writes for floating point? If so, does this affect the discussion?

Philip

Thank you, Philip. I’ll prepare a patch and make sure there are proper comments.

AFAICT, FP environment updates are not explicitly modeled at all; in practice they are only captured implicitly as part of general memory updates. Long term it will be worthwhile to model them carefully, but that is out of the scope of the proposed change.

Cheers,

I’ve prepared a patch for this change; please feel free to review/comment at http://llvm-reviews.chandlerc.com/D2670