[FPEnv] undef and constrained intrinsics?

How should the constrained FP intrinsics behave when called with an operand that is “undef” while the FP environment is not the default environment? I’m specifically working in the middle-end passes, if that matters. Let me start with the assumption that the rounding mode is not relevant. That still leaves the exception handling as a factor:

With “fpexcept.maytrap” we are allowed to drop instructions that could or would cause a trap at run-time. Does this imply we can fold the entire instruction to a new undef?

With “fpexcept.strict” we are not allowed to lose or reorder traps. So how does that affect undef? What happens in the backend? Perhaps the middle end should leave the instruction with the undef and let the backend do something reasonable?

The “maytrap” case is the one I’m most interested in. An earlier version of D103169 would fold away undef constrained intrinsics in the maytrap case. That was removed, I believe, so the question could be handled separately without affecting the rest of the patch.

Opinions?

Can we use the regular FP instructions (fadd, fmul, etc.) as a model?

If both operands to any of the binops are undef, then the result is undef. So for the corresponding constrained intrinsic, if both operands are undef, the result is undef and the exception state is also undef:

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float undef, metadata !"round.dynamic", metadata !"fpexcept.strict")

%r = undef

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float undef, metadata !"round.dynamic", metadata !"fpexcept.maytrap")

%r = undef

If one operand is undef and the other is a regular value, assume that the undef value takes on some encoding of SNaN:

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float %x, metadata !"round.dynamic", metadata !"fpexcept.strict")

%r = call float @llvm.experimental.constrained.fadd.f32(float SNaN, float %x, metadata !"round.dynamic", metadata !"fpexcept.strict") ; raise invalid op exception

(%r could be folded to QNaN here, but we can’t get rid of the call, so don’t bother?)

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float %x, metadata !"round.dynamic", metadata !"fpexcept.maytrap")

%r = QNaN ; exception state does not have to be preserved

Does that match the proposed behavior in https://reviews.llvm.org/D102673 (cc @sepavloff)?

We could go further (potentially reduce to poison) if we have fast-math-flags on the calls – just as we partially do with the regular instructions – but it probably doesn’t matter much to real code.

The concept of an undefined value has always been obscure and has caused many questions. I’d like to share my opinion, though I am not sure I understand this concept correctly.

LLVM documentation (https://llvm.org/docs/LangRef.html#undefined-values) describes undefined values:
“Undefined values are useful because they indicate to the compiler that the program is well defined no matter what value is used”. So these are values on which the result of program execution does not depend. This is why an undefined value may be replaced by an arbitrary value of proper type and range. The choice of the replacement value is dictated mainly by convenience. If however the produced result depends on this choice, it means the value of undef affects results, so the initial supposition is broken and we have undefined behavior.

I agree with Sanjay that constrained intrinsics should behave in the same way as regular FP operations with respect to undef. Control modes (like the rounding mode) influence the result value, but we know that the particular value of undef is not important. FP exceptions are a bit more complex. If the value of undef may be arbitrary, it is not possible to guarantee that the FP exceptions would be the same for all possible values. So we can assume that undef operands do not affect FP exceptions: either such an operation is eliminated, because its value is not used, or the operation itself does not use the undef argument.

If any of the standard IR FP operations has an undef argument, the result may be either undef or any FP value. It is convenient to use NaN in such cases. It does not make the program more correct, but it can help to detect undefined behavior in some FP environments. However, an undef result seems a better choice than NaN, because then the user of the undef value may choose a convenient representation for it.

I do not see any reason to distinguish between the cases “all operands are undef” and “only one operand is undef”. In both cases we get a value that is not used in a correct program.

So I would propose transformations:

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float undef, metadata !"round.dynamic", metadata !"fpexcept.strict")

%r = undef

And

%r = call float @llvm.experimental.constrained.fadd.f32(float undef, float %x, metadata !"round.dynamic", metadata !"fpexcept.strict")

%r = undef

What do you think about it?

Unfortunately, it’s not as easy as “any undef in → undef out”. That’s a big reason for moving away from undef in IR.

If you read this page bottom-up (there must be a better link somewhere?) and then read the follow-ups in the thread, you’ll see how we arrived at the current rules for the standard FP ops:
https://lists.llvm.org/pipermail/llvm-dev/2018-March/121481.html

Thank you for the reference. I saw an even older discussion on this topic in the IRC channel. It looks like the problem of understanding undef has persisted for a long time. Probably it is because undef is a “one of the set” value, but the set itself is not specified. For floating-point values it generally includes all possible values, but, for example, if -ffast-math is in effect, NaNs are not in this set.

Another source of problems is replacing undef with a concrete value. It turns “one of the set” into one value, and this contraction cannot be equally good for all cases. For example:

%A = select i1 undef, i32 %X, i32 %Y
%B = select i1 undef, i32 %X, i32 42
%C = icmp eq i32 %A, %B

Contracting the select instructions to their first operands, as recommended in the LLVM Language Reference Manual, would let the compiler deduce that %C is true, which is not correct in the general case.

The concept of poison seems more clear and consistent. I wonder if we could make transformations like:

%r = fadd undef, %x

poison

and similarly for the constrained intrinsics. Using poison is consistent with using undef for values on which the result does not depend. When poison needs a representation in machine code, it could be lowered to NaN, which behaves similarly at run-time. The same solution was already adopted for shufflevector. Does anything prevent such a transformation?

Hi Serge,

%r = fadd undef, %x

poison

Supporting this transformation is slightly complex because a value can be partially undef:

%r = fadd undef, %x ; let’s assume that this is poison
%r = fadd (or undef, 1), %x ; what about this?

%r = fadd (or undef, 0x7F…FF), %x ; this has a single undef bit only; maybe it isn’t undef enough to yield poison.

So, relying on ‘fadd poison %x → poison’ and making poison appear as frequently as possible might be a cleaner option (in my opinion).

About moving away from undef:
Let me share how things are going, since people might be interested in it.
There are three sources of undef currently, and for each of them some kind of progress has been made (thanks to reviewers and people):

  1. Undef is being used as a don’t-care value.
    Creating a vector value is done via a sequence of insertelement instructions, e.g., insertelement(insertelement(undef, x, 1), y, 2).
    As shown in this expression, undef is used as a don’t-care value.
    A few patches have landed to make instructions use poison instead. For example, IRBuilder::CreateShuffleVector now uses poison for its second vector operand.
    However, there are so many places where UndefValue is used that they aren’t fully updated yet. :frowning:
    Updating transformations to use poison for insertelement/insertvalue/phi/etc’s don’t-care value will facilitate further optimizations.
    One successful case I observed was InstCombine’s unit test removing unreachable instructions after switching to poison.

  2. Undef value is used in the semantics of shufflevector’s undef mask.
    Shufflevector is currently defined to yield undef if the mask is undef.
    This is because optimizations want to regard a shufflevector of a specific form as equivalent to ‘insertelement undef, …’.
    If don’t-care values are fully updated to be poison (item 1), the semantics of shufflevector can be finally updated to return poison.
    There is another blocker, though: X86-64’s mm*_undefined* intrinsics are supposed to return an uninitialized vector, which is not the same as undef; unlike undef, each read should return a consistent value.
    Using shufflevector with undef mask to encode mm*_undefined* was already wrong, and making shufflevector return poison will make it worse.

  3. Undef value is used to represent the value of the uninitialized memory.
    Poison can be used instead, but (in the case of C/C++) two cases should be treated carefully:
    (1) Translating bit fields into IR: since there is no bitwise load and store in IR, poison bits can contaminate the whole loaded value.
    (2) A variable whose address is escaped: I heard from a few people studying C standard that an uninitialized variable whose address is escaped contains an unspecified value, which is more defined than both undef and poison IIUC.
    Precisely encoding this case will come at a cost.

Fully addressing these requires correctly understanding a number of transformations/analyses in LLVM; it is pretty scary to fix them as well :confused:

Best,
Juneyoung

I notice that we’re not currently folding “fadd undef, undef” to “undef”. Shouldn’t we be? See simplifyFPOp() in InstructionSimplify.cpp.

I like the idea of having the ebIgnore and ebMayTrap cases be handled the same as the regular FP instructions. Folding to a NaN seems reasonable, and that’s what the existing code does in the default FP environment.

For ebStrict, and possibly other cases, I like the idea of replacing the undef with an SNaN, but we would need to do it late, and it would need to be done even when optimizations are turned off. I’m not sure what to do if the nnan fast-math flag is present, though. Ignore it? If an undef reaches the backend, it seems like we have an error?

I notice that we’re not currently folding “fadd undef, undef” to “undef”. Shouldn’t we be? See simplifyFPOp() in InstructionSimplify.cpp.

If there’s some path where the “fadd undef, undef → undef” fold doesn’t happen, that seems like a bug.

InstSimplify should have called ConstantFolding on this before reaching simplifyFPOp(), so we could assert in simplifyFPOp() that we have at least one non-constant operand.

For ebStrict, and possibly other cases, I like the idea of replacing the undef with an SNaN, but we would need to do it late, and it would need to be done even when optimizations are turned off. I’m not sure what to do if the nnan fast-math flag is present, though. Ignore it? If an undef reaches the backend, it seems like we have an error?

It should always be safe to ignore/drop FMF, so yes, I’d ignore those while we make sure the non-FMF functionality is working.

Hi Juneyoung,

%r = fadd undef, %x

poison

Supporting this transformation is slightly complex because a value can be partially undef:

%r = fadd undef, %x ; let’s assume that this is poison
%r = fadd (or undef, 1), %x ; what about this?

%r = fadd (or undef, 0x7F…FF), %x ; this has a single undef bit only; maybe it isn’t undef enough to yield poison.

So, relying on ‘fadd poison %x → poison’ and making poison appear as frequently as possible might be a cleaner option (in my opinion).

Agreed. A poison value cannot be partially poison, which simplifies things.

Let me share how things are going, since people might be interested in it.

It is a good idea to provide real use cases, thank you!

Fully addressing these requires correctly understanding a number of transformations/analyses in LLVM; it is pretty scary to fix them as well :confused:

We could do it for floating point operations only, to make the transition gradual.