[RFC] Using basic block attributes to implement non-default floating point environment

Hi all,

This proposal is aimed at supporting a floating point environment in which some properties, such as rounding mode or exception behavior, differ from the defaults. This includes in particular support for '#pragma STDC FENV_ACCESS' and '#pragma STDC FENV_ROUND', as well as some other related facilities.
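For reference, a minimal C sketch of the facilities in question (the pragmas and the fenv.h functions are standard C, though compiler support varies; FENV_ROUND is the newer C2x feature):

#include <fenv.h>

#pragma STDC FENV_ACCESS ON      /* the FP environment may be read/modified */

double upward_sum(double a, double b) {
  int save = fegetround();       /* save the current rounding mode */
  fesetround(FE_UPWARD);
  double r = a + b;              /* executed with round-toward-positive */
  fesetround(save);              /* restore the original environment */
  return r;
}

/* FENV_ROUND (C2x) selects a constant rounding mode for a scope: */
#pragma STDC FENV_ROUND FE_DOWNWARD
double downward_sum(double a, double b) { return a + b; }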

Problem

On many processors, using a non-default floating point mode requires modifying global state by writing to a control register. This presents a difficulty for the implementation, as a floating point instruction must not be moved into code that executes with a different floating point state. To prevent such moves, the current solution represents FP operations with special (constrained) intrinsics, which do not participate in optimizations (http://lists.llvm.org/pipermail/cfe-dev/2017-August/055325.html). Importantly, if constrained FP operations are used in any part of a function, they must be used throughout the entire function, including inlined calls.

The main concern with this approach is the performance drop. Using constrained FP operations means that optimizations on FP operations are turned off; that is precisely why they are used. Even if the non-default FP environment is used in only a small part of a function, optimizations are turned off in the entire function. For many practical applications this is unacceptable.
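To make the cost concrete, here is a hypothetical sketch: under the current scheme, the one small FENV_ACCESS block below forces every FP operation in the function, including the loop that never touches the FP state, to be emitted as constrained intrinsics such as llvm.experimental.constrained.fmul:

#include <fenv.h>

double mostly_default(const double *v, int n, double x, double y) {
  double sum = 0.0;
  for (int i = 0; i < n; ++i)      /* ordinary FP math, yet it is also */
    sum += v[i] * v[i];            /* compiled as constrained intrinsics */
  {
    #pragma STDC FENV_ACCESS ON    /* only this block needs the FP state */
    fesetround(FE_TOWARDZERO);
    sum += x / y;
    fesetround(FE_TONEAREST);
  }
  return sum;
}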

Although this approach prevents instructions from moving, it does not prevent basic blocks from moving. Code that uses a non-default FP environment must set the appropriate state registers, perform the necessary operations, and then restore the original mode. If this activity is spread across several basic blocks, block-level optimizations can break the arrangement; for instance, a basic block containing default FP operations can be moved after the block that sets the non-default FP environment.

Solution

The proposed approach is based on an extension of basic blocks. It is assumed that all code in a basic block executes in the same FP environment; this assumption is consistent with the rules for using '#pragma STDC FENV_ACCESS' and similar facilities. If the environment differs from the default, the block holds a pointer to an object that keeps the block's attributes, including the FP settings. All basic blocks derived from the same source block in which '#pragma STDC FENV_ACCESS' is specified share the same attribute object. In bitcode these attributes are represented by metadata attached to the basic blocks.

With basic block attributes the compiler can check the validity of an instruction move by comparing the attributes of the source and destination BBs. An instruction should keep a pointer to its BB attributes even when it is detached from its BB, to support the common technique of moving instructions. Similarly, the compiler can verify the validity of a BB move.
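A rough C sketch of the intended legality check; none of these types or fields exist in LLVM today, they only illustrate the proposal:

/* Hypothetical attribute object shared by all blocks that were derived
   from the same FENV_ACCESS region. */
struct FPEnvAttrs {
  int rounding_mode;        /* e.g. FE_TONEAREST, FE_UPWARD, ... */
  int strict_exceptions;    /* nonzero if FP exceptions are observable */
};

struct Block { struct FPEnvAttrs *attrs; };  /* NULL means default env */
struct Instr { struct FPEnvAttrs *attrs; };  /* kept even when detached */

/* An instruction may be moved into a block only if both agree on the
   FP environment, i.e. they point to the same attribute object. */
int can_move_into(const struct Instr *i, const struct Block *dst) {
  return i->attrs == dst->attrs;
}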

This approach allows an implementation in which constrained FP operations are 'jailed' in their basic blocks. The rest of the function can still use ordinary FP operations and benefit from optimizations. Depending on the target hardware, some FP operations may be allowed to cross the 'jail' boundary, for instance if they correspond to instructions that directly encode the rounding mode while the FP environment changes only the rounding mode.
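As an illustration of the last point, some ISAs can encode the rounding mode in the instruction itself, so such operations need not depend on the global FP state at all. A hedged example using the AVX-512 static-rounding intrinsics:

#include <immintrin.h>

/* AVX-512 EVEX static rounding: the rounding mode is part of the
   instruction encoding, so this add ignores the MXCSR rounding field
   and could safely cross a 'jail' boundary that only changes rounding. */
__m512 add_round_up(__m512 a, __m512 b) {
  return _mm512_add_round_ps(a, b, _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC);
}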

Is this solution feasible? What obstacles, difficulties, or drawbacks does it have? Can it be improved? Any feedback is welcome.

Thanks,
--Serge

> The main concern with this approach is the performance drop. […] Even if the non-default FP environment is used in only a small part of a function, optimizations are turned off in the entire function. For many practical applications this is unacceptable.

The reason, as you're likely aware, that the constrained FP operations must be used within the entire function is that, if you mix the constrained FP operations with the normal ones, there's no way to prevent code motion from intermixing them. The solution I recall being discussed to this problem of a function which requires constrained operations only in part is outlining in Clang - this does introduce function-call overhead (although perhaps some MI-level inlining pass could mitigate that in part), but otherwise permits normal optimization of the normal FP operations.
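For illustration, a sketch of what such outlining might look like at the source level (hypothetical function names; the real transformation would happen during IR generation in Clang):

#include <fenv.h>

/* Only this outlined function is compiled with constrained FP semantics;
   noinline keeps later passes from merging it back into its callers. */
__attribute__((noinline))
static double div_toward_zero(double x, double y) {
  #pragma STDC FENV_ACCESS ON
  fesetround(FE_TOWARDZERO);
  double r = x / y;
  fesetround(FE_TONEAREST);
  return r;
}

double caller(double a, double b, double x, double y) {
  double s = a * b;              /* still optimized as ordinary FP math */
  return s + div_toward_zero(x, y);
}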

> Although this approach prevents instructions from moving, it does not prevent basic blocks from moving. […]

Can you please provide some pseudocode to illustrate this problem? Moving basic blocks moves the instructions within them, and I don't see how our current semantics would prevent illegal reorderings of the instructions but not prevent illegal reorderings of groups of those same instructions. At the LLVM level, we currently model the FP-environment state as a kind of memory, and so the operations which adjust the FP-environment state must also be marked as writing to memory, but that's true with essentially all external program state, and that should prevent all illegal reordering.

Thanks,

Hal


> The solution I recall being discussed to this problem of a function which requires constrained operations only in part is outlining in Clang […]

Johannes and I discussed the outlining here offline; two notes:

1. The outlining itself will prevent the undesired code motion today, but in the future we'll have IPO transformations that will need to be specifically taught to avoid moving FP operations into these outlined functions.

2. The outlined functions will need to be marked with noinline and also noimplicitfloat. In fact, all functions using the constrained intrinsics might need to be marked with noimplicitfloat. The above-mentioned restrictions on IPO passes might be conditioned on the noimplicitfloat attribute.

-Hal

> The reason, as you're likely aware, that the constrained FP operations must be used within the entire function is that, if you mix the constrained FP operations with the normal ones, there's no way to prevent code motion from intermixing them.

This proposal presents a way to prevent such intermixing: in some basic blocks we use normal FP operations, in others constrained ones, and the BB attributes allow checking the validity of instruction moves.

> The solution I recall being discussed to this problem of a function which requires constrained operations only in part is outlining in Clang - this does introduce function-call overhead (although perhaps some MI-level inlining pass could mitigate that in part), but otherwise permits normal optimization of the normal FP operations.

> Johannes and I discussed the outlining here offline; two notes:
>
> 1. The outlining itself will prevent the undesired code motion today, but in the future we'll have IPO transformations that will need to be specifically taught to avoid moving FP operations into these outlined functions.
>
> 2. The outlined functions will need to be marked with noinline and also noimplicitfloat. In fact, all functions using the constrained intrinsics might need to be marked with noimplicitfloat. The above-mentioned restrictions on IPO passes might be conditioned on the noimplicitfloat attribute.

Outlining is an interesting solution, but unfortunately it is not an option for machine learning processors. Branching is expensive on them, and some of them have no call instruction at all, so all function calls must eventually be inlined. On the other hand, rounding control is especially important on such processors: they usually operate on short data types, and using the proper rounding mode can gain precision. They often allow encoding the rounding mode directly in an instruction, and making a call just to execute a couple of instructions is not acceptable.

> Although this approach prevents instructions from moving, it does not prevent basic blocks from moving. […]

> Can you please provide some pseudocode to illustrate this problem? Moving basic blocks moves the instructions within them, and I don't see how our current semantics would prevent illegal reorderings of the instructions but not prevent illegal reorderings of groups of those same instructions. At the LLVM level, we currently model the FP-environment state as a kind of memory, and so the operations which adjust the FP-environment state must also be marked as writing to memory, but that's true with essentially all external program state, and that should prevent all illegal reordering.

Let's consider a transformation like LoopUnswitch. The source:

for (int i = 0; i < N; ++i) {
  #pragma STDC FENV_ACCESS ON
  set_fp_environment(X);
  if (i > K)
    some_func();
  // Basic block that calculates the condition starts here.
  bool f = float_a < float_b;
  if (f)
    do_1(i);
  else
    do_2(i);
}

As the basic block that calculates the condition f does not depend on values calculated in the loop, it can be hoisted:

bool f = float_a < float_b;
if (f) {
  for (int i = 0; i < N; ++i) {
    #pragma STDC FENV_ACCESS ON
    set_fp_environment(X);
    if (i > K)
      some_func();
    do_1(i);
  }
} else {
  for (int i = 0; i < N; ++i) {
    #pragma STDC FENV_ACCESS ON
    set_fp_environment(X);
    if (i > K)
      some_func();
    do_2(i);
  }
}

Nothing prevents moving the BB that calculates the condition: it has no data dependencies that prohibit such relocation, and its code does not adjust the FP-environment state, so it may be moved ahead of set_fp_environment. But the transformed code has different semantics, as f is now calculated in a different FP environment. To prevent such transformations we would need to treat all FP operations as accessing the FP state modeled as memory. That would prevent any code reordering and result in a performance drop.

> The reason, as you're likely aware, that the constrained FP operations
> must be used within the entire function is that, if you mix the constrained
> FP operations with the normal ones, there's no way to prevent code motion
> from intermixing them.

> This proposal presents a way to prevent such intermixing: in some basic
> blocks we use normal FP operations, in others constrained ones, and the
> BB attributes allow checking the validity of instruction moves.

I'm really unsure how feasible it is to look at basic block annotations
all the time. It might also interfere with CFG simplifications, e.g.,
block merging. Having "implicit" dependences is generally bad (IMHO).

> Outlining is an interesting solution, but unfortunately it is not an option
> for machine learning processors. Branching is expensive on them, and some
> of them have no call instruction at all, so all function calls must
> eventually be inlined.

Would "really late" inlining be an option?

I’d like to emphasize that the constrained intrinsics prevent optimizations by default. We have a plan to go back and teach individual optimizations how to handle these intrinsics. The idea is that if an optimization knows nothing about the constrained intrinsics then it won’t try to transform them, but if an optimization has been taught to handle the intrinsics correctly then it isn’t limited by anything other than the semantics of the constraints. Once we’ve updated an optimization pass, it will be able to do everything with a constrained intrinsic that has the “relaxed” settings (“fpexcept.ignore” and “round.tonearest”) that it would be able to do with the regular operation.

This philosophy is key to the way that we’re approaching FPENV support. One of the primary goals is that any optimization that isn’t specifically aware of the mechanisms we’re using will automatically get conservatively correct behavior. The problem with relying on basic block attributes is that it requires teaching all current optimizations to look for the attribute.

We had a somewhat similar problem when we implemented Windows exception handling. The implementation introduced basic blocks that instructions shouldn’t be hoisted or sunk into. We ended up having to chase down a lot of cases where our rules were violated. I think this stems from not having a single place to check the legality of code motion.

-Andy

> I’d like to emphasize that the constrained intrinsics prevent
> optimizations *by default*. We have a plan to go back and teach
> individual optimizations how to handle these intrinsics. […]

The way I understood it, the constrained intrinsics are not the only
problem but the regular ones can be. That is, optimizations will move
around/combine/replace/... regular floating point operations in the
presence of constrained intrinsics, because they do not impact each other
(other than via def-use). If that understanding is correct, and this is a
problem, then I doubt that we want basic block attributes. Also, given
that the constrained intrinsics are inaccessiblememonly, optimizations
will treat them as they treat other opaque instructions for which certain
effects are known.

(Btw. is it intentional that these can unwind?)

> This philosophy is key to the way that we’re approaching FPENV
> support. One of the primary goals is that any optimization that isn’t
> specifically aware of the mechanisms we’re using will automatically
> get conservatively correct behavior. The problem with relying on basic
> block attributes is that it requires teaching all current
> optimizations to look for the attribute.

Agreed, totally.

> We had a somewhat similar problem when we implemented Windows
> exception handling. The implementation introduced basic blocks that
> instructions shouldn’t be hoisted or sunk into. We ended up having to
> chase down a lot of cases where our rules were violated. I think this
> stems from not having a single place to check the legality of code
> motion.

Agreed. Outlining seems a reasonable approach to avoid code motion, or
at least to restrict the locations that need to know about the constraints.
Given that we already have noimplicitfloat, it seems natural to use it
here and make sure IPOs honor it.

Cheers,
  Johannes

> The way I understood it, the constrained intrinsics are not
> the only problem but the regular ones can be. That is,
> optimizations will move around/combine/replace/... regular
> floating point operations in the presence of constrained
> intrinsics, because they do not impact each other (other than
> via def-use). If that understanding is correct, and this is a
> problem, then I doubt that we want basic block attributes.
> Also, given that the constrained intrinsics are
> inaccessiblememonly, optimizations will treat them as they
> treat other opaque instructions for which certain effects are known.

Right. The motion of first class FP operations is nearly unrestricted. In particular, there is nothing to prevent them from moving past a call to something like fesetround() (or an architecture-specific intrinsic that does something similar). There shouldn't be any calls to fesetround() outside of a block where FENV_ACCESS is enabled, but we need such a call to act as a barrier, so in mixed mode we need constrained intrinsics even for the nominally unconstrained operations, in order to restrict their movement. This would imply that calling fesetround() should trigger the strictfp mode in a function the same way that having constrained intrinsics does. That feels a bit like pulling a loose thread on a sweater, but we should think about it.
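A small example of the hazard: nothing in the def-use chains ties either multiply to the fesetround() calls, so unless those calls are modeled as barriers a pass is free to move the multiplies across them (hypothetical example):

#include <fenv.h>
#pragma STDC FENV_ACCESS ON

double hazard(double a, double b) {
  double s = a * b;           /* intended: default rounding */
  fesetround(FE_UPWARD);
  double t = a * b;           /* intended: round up; if this multiply is
                                 hoisted above fesetround(), or the first
                                 one sinks below it, semantics change */
  fesetround(FE_TONEAREST);
  return s - t;
}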

> (Btw. is it intentional that these can unwind?)

I think so. If you unmask FP exceptions most of the constrained intrinsics might trigger a signal. I don't know if that needs to be modeled as unwind on Unix systems (probably not?), but on Windows I'm pretty sure it can be caught by SEH.
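For reference, on glibc systems the unmasking looks like this (feenableexcept() is a GNU extension, not standard C; whether the resulting trap should be modeled as unwinding is exactly the open question):

#define _GNU_SOURCE
#include <fenv.h>

/* After unmasking, the division below delivers SIGFPE (or, on Windows
   with the equivalent _controlfp_s() setup, an SEH exception) instead
   of quietly producing +Inf. */
double may_trap(double a, double b) {
  feenableexcept(FE_DIVBYZERO);
  return a / b;
}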

> Agreed. Outlining seems a reasonable approach to avoid code motion, or
> at least to restrict the locations that need to know about the constraints.
> Given that we already have noimplicitfloat, it seems natural to use it
> here and make sure IPOs honor it.

I have some reservations about outlining. I guess it solves the immediate problem, but as Serge noted it isn't friendly to all targets, and I think the call overhead would often be an issue even on systems that can handle calls.

How would the front end decide when to outline? Might it be better to let the user make that choice and provide a mechanism to mark a function for "very late inlining"?

-Andy

For the record, I'd love to have an "inline late" attribute for other purposes as well. :-)

Basic block attributes would be a pretty major change for LLVM. If we were to add something like this to LLVM, it should be really well designed, and support other use cases beyond just the FP environment.

General regions could support things like:

  • replace lifetime.start/end

  • async exception handling

  • better Windows EH

  • better replacement for inalloca

  • I’m sure there are use cases in parallelization that I’m unfamiliar with

As is, no, I don’t think we should put attributes on blocks.

+1

We have a variety of code motion issues to solve for GPUs, but something block level won’t really help much.

-Matt

I see this approach is not supported, so I will try to work out another solution.
Nevertheless, I'd like to address some of the comments, just for the record.

> I’d like to emphasize that the constrained intrinsics prevent optimizations by default. We have a plan to go back and teach individual optimizations how to handle these intrinsics. […]

This work is necessary for any approach, but for the current one it is vital. As constrained intrinsics are used in the entire function body, the code base where the solution must work correctly and fast is larger. The performance drop makes this solution inappropriate for many users; they won't use it until the performance becomes close to the case without constrained intrinsics. In contrast, basic block attributes confine the constrained intrinsics to only a part of the function code. It would be easier to make such a solution suitable for use in production code.

Of course, when reasoning about performance, it would be nice to have numbers.

> […] The problem with relying on basic block attributes is that it requires teaching all current optimizations to look for the attribute.

All these optimizations must eventually be modified in the current approach as well. If a transformation makes dangerous instruction or basic block moves, it must be taught to process constrained intrinsics correctly, or it becomes a source of performance drop.

But you are right: implementing basic block attributes requires a mechanism that checks the validity of instruction and basic block moves. Once that is implemented, finding the places where a transformation requires modification becomes simpler.

> The way I understood it, the constrained intrinsics are not the only
> problem but the regular ones can be. That is, optimizations will move
> around/combine/replace/… regular floating point operations in the presence
> of constrained intrinsics, because they do not impact each other (other
> than via def-use). If that understanding is correct, and this is a problem,
> then I doubt that we want basic block attributes.

Basic block attributes allow partitioning the function code into realms, in which an FP operation is represented either by a constrained intrinsic or by a regular node. Code that moves instructions checks whether a particular instruction is allowed to cross a realm boundary. This mechanism prevents mixing constrained intrinsics with regular FP nodes, but still allows optimizations like inlining.

> Would “really late” inlining be an option?

Late inlining means fewer optimization opportunities. If the resulting code is a single function (as in the case of kernels), it is usually more profitable to do early inlining.

Thanks,
--Serge
