RFC: token arguments and operand bundles

Hello everyone,

I’ve just uploaded a patch (https://reviews.llvm.org/D70261) to introduce a couple of new token types to be used with constrained floating point intrinsics and, optionally, vector predicated intrinsics. These intrinsics may not be of interest to many of you, but I have a more general question.

I would like some general feedback on the way I am proposing to use token arguments and operand bundles. I have an incomplete understanding of how these are intended to be used, and I want to make sure what I have in mind is consistent with the philosophy behind them.

Currently, the constrained floating point intrinsics require string metadata arguments to describe the rounding mode and exception semantics. These “arguments” are really providing information to the optimizer about what it can and cannot assume when acting on these intrinsics. The rounding mode argument potentially overrides the default optimizer assumption that the “to nearest” rounding mode is in use, and the exception behavior argument overrides the default optimizer assumption that floating point operations have no side effects. I’ve never liked the use of strings here, and the fact that these arguments are not actually inputs to the operation represented by the intrinsic seems vaguely wrong.

A typical call to a current intrinsic looks like this:

%sum = call double @llvm.experimental.constrained.fadd(double %x, double %y, metadata !"round.dynamic", metadata !"fpexcept.strict")

From a front-end perspective, I think it’d be preferable if these either got encoded in the function name or were normal enum value arguments. It’s a bit awkward to expose things to the user that must be constant or of a special type or in a special metadata slot, since we now need more special support for it. If the optimization passes couldn’t identify a constant value for one of the arguments, these seem like they can fall back to assuming the most conservative semantics (round.dynamic and fpexcept.strict, i.e. don’t optimize) without loss of precision or generality.


Let me clarify. These aren’t intended to be exposed to the user. The user code that leads to the generation of these intrinsics will be normal floating point operations combined with either pragmas (such as “STDC FENV_ACCESS ON”) or command line options (such as the recently introduced “-fp-model=strict”).

The reason I’ve been avoiding normal constant values is that it provides no information when you’re reading the IR. For example:

%sum = call double @llvm.experimental.constrained.fadd(double %x, double %y, i32 1, i32 2)

What does that mean? You’d need to consult an external reference to have any idea.


I understand that, but I think you missed my point. Not all front-ends are clang, and non-C frontends are also interested in this work. And even C might eventually want to be able to use these more generally. For example, the current C standard (afaik) doesn’t define what must happen if this pragma tried to use a non-literal constant, such as a template parameter, as the argument. But it’s not obvious to me why LLVM should inherit that limitation. Currently it seems to be implemented in a way that requires special handling in any front-end, influenced strongly by the special handling it’s now getting in C. For other languages, it’s doable to expose this to users regardless, but if you’re already considering changing it, my vote would be to use a normal representation with first-class values.

However, I really appreciate the specifics on the concern you brought up, because that’s a good point. If it’s just about better IR printing, perhaps we can just address that directly?

Most simply, perhaps these calls could customize the printing to append a comment? Some places already do that, for example to show Function Attributes.

Similarly, but more major, LLVM could perhaps define a new “named constant” syntax for the parser format (either with special tokens like your current PR and/or that get defined elsewhere like existing global constants). Certain instructions (such as these) could then use the option to customize the printing of their arguments to use the named constant (after parsing, they’d just be a normal Constant—only printing would optionally use them to show the information better to the reader).

To me it seems constant tokens do the job just fine without the need for a custom code path.

We really have been trying to keep in mind that LLVM needs to support multiple front ends, which may be implementing different language standards. As much as possible, I’ve been trying to let the IEEE 754 spec drive my thinking about this, though I’ll admit that on a few points I’ve used the C99 spec as a sort of reference interpretation of IEEE 754.

LLVM’s IRBuilder has been recently updated to provide an abstraction layer between front ends and the optimizer. So, if you’re using IRBuilder, you just need to call setIsFPConstrained() then, optionally, IRBuilder::setDefaultConstrainedExcept() and/or setDefaultConstrainedRounding(). After that, calls to something like IRBuilder::CreateFAdd() will automatically create the constrained intrinsic with the appropriate constraints, regardless of how we end up representing them. If your front end isn’t using IRBuilder, I will admit it gets a bit more complicated.

I wouldn’t be opposed to a solution that involved a custom printer for these arguments, but I don’t think it really adds anything that we wouldn’t get from using tokens as I have proposed. Likewise with the named constant idea. On the other hand, if I’m misusing tokens then maybe what constants would add is a way to avoid that misuse.

Regarding the question of what is exposed to users and how, that’s mostly up to the front end. I would like to clarify how we intend for this to work, in general. Simon touched on this briefly, but I’d like to be a bit more verbose to make sure we’re all on the same page.

There are effectively two distinct modes of source code translation to IR with respect to floating point operations – one where the user is allowed to modify the floating point environment and one where they are not. This may not have been clear to everyone, but by default LLVM IR carries with it the assumption that the runtime rounding mode is “to nearest” and that floating point operations do not have side effects. This was only documented recently, but this is the way the optimizer has always behaved. In this default mode, the IR shouldn’t change the floating point environment. I would encourage front ends to document this more specifically, saying that the user is not permitted to change the FP environment.

This leads to the necessity of a second state in which the optimizer does not assume the default rounding mode and does not assume that floating point operations have no side effects. Proscribing these assumptions limits optimization, so we want to continue allowing the assumptions by default. The state where the assumptions are not made is accomplished through the use of constrained intrinsics. However, we do not wish to completely eliminate optimizations in all cases, so we want a way to communicate to the optimizer what it can assume. That is the purpose of the fpround and fpexcept arguments. These are not intended to control the rounding mode or exception reporting. They only tell the compiler what it can assume.

Understanding this, front ends can control these in any way they see fit. For instance, the front end might have a global setting that changes the rounding mode to “toward zero.” In that case, it would create constrained intrinsics for all FP operations and set the rounding mode argument (however we end up representing it) to rmTowardZero (a constant currently defined by LLVM corresponding to the “fpround.towardzero” metadata argument). Then the optimizer can use this information to perform optimizations like constant folding.

Runtime changes to the rounding mode are a separate matter. As I said above, I think front ends should define clear circumstances under which such changes are permitted, but the mechanism for making such changes is independent of the constrained FP intrinsics. For example, consider the following C function.

double foo(double A, double B, double C) {
  int OrigRM = fegetround();
  fesetround(FE_TOWARDZERO);
  double tmp = A + B;
  fesetround(OrigRM);
  return tmp + C;
}


Assuming the compiler was in a state where it knew fenv access was enabled, I would expect that to get translated to something like this (after SROA cleanup):

define double @foo(double %A, double %B, double %C) {
  %orig.rm = call i32 @fegetround()
  %ignored = call i32 @fesetround(i32 3072)
  %tmp = call double @llvm.experimental.constrained.fadd(double %A, double %B) [ "fpround"(token rmDynamic), "fpexcept"(token rmStrict) ]
  %ignored2 = call i32 @fesetround(i32 %orig.rm)
  %result = call double @llvm.experimental.constrained.fadd(double %tmp, double %C) [ "fpround"(token rmDynamic), "fpexcept"(token rmStrict) ]
  ret double %result
}

Notice here the literal constant that C defines is still used for the call to fesetround(FE_TOWARDZERO), and the variable is used for the call that restores the rounding mode. Also notice that in both fadd operations, the rounding mode is declared as rmDynamic. I have an idea that we ought to have a pass that recognizes the fesetround library call and uses the information it finds there to change the rounding mode operand in the first fadd to rmTowardZero, but the front end won’t be expected to do that. We’ll probably want an intrinsic to change the rounding mode so that we don’t need to recognize all manner of language-specific libcalls, but that’s a problem for later.

I hope this has been more helpful than tedious. Also, I feel like I should reiterate that I am still seeking all opinions about the use of tokens and operand bundles or any other means of representing the fp constraints. I just want to make sure that we all have the same understanding of what the information I’m trying to represent in IR means.


As I recall, tokens were added largely as a mechanism to constrain particular users to particular single-entry regions of the CFG, and to do so in a way that guaranteed that later phases would be able to determine to which region any given use was tied. That sounds pretty different to me than the use here of wanting something that’s less string-y than metadata but prints as something less obtuse than an integer constant. I think it would functionally work to do what you’re proposing, but it strikes me as mixing unrelated concepts (one of which is already fairly esoteric) in a way that I for one would caution against. Your issue here strikes me as similar to what ‘immArg’ addresses, and I’d imagine that having a way to constrain particular ‘immArg’ arguments to particular known enumerations of values in a way that prints legibly is something that could have other uses as well and involve less conceptual gymnastics. I’ll also note that with the CFG-centric view of tokens, it makes sense that the only constant token is ‘None’ – you only need one way to indicate “not tied to any region”.

Bundles strike me as a better fit. What strikes me as a bit odd there is that bundles tend (or at least originally tended) to be used to communicate things orthogonal to the semantics of the call itself (or the operation represented by the call) – the ambient state of the virtual machine, the GC values that happen to be live across the call, the EH region that happens to contain the call. In this new case, the bundle is communicating something relevant to the semantics of the operation that the call represents… but it’s also true that, like the other examples, it’s communicating something about the context or ambient state at the point of the call, and that it (IIUC) would be applied consistently across the different fp operations… so that one passes my sniff test, for whatever that’s worth.

I’m also wondering if “named metadata” could help with the readability issue.


Hi Simon! Yep, I too very much appreciate that you asked for discussion. I hope I’m not giving you more than you bargained for! (I likely wouldn’t have seen this otherwise until the next LLVM release)

Yep, I think we’re on the same page about the expectations of these operands. I’m aware that the user and/or front-end needs to also explicitly set the state.

Yes, we use IRBuilder. But that’s not the problem. The problem is that it assumes that all front-ends want to maintain this information as global lexical state until llvm lowering. That’s OK for clang, since it doesn’t currently do optimizations on an intermediate IR. But I’m arguing that it’d be easier for other front-ends to pick up this work too if LLVM used the regular call-argument channel for this information. Currently, it seems to expect that all front-ends will do some sort of legalization and move this information into a side-channel (as C99 currently specifies). That’s doable; it’d just be nicer if that was buried inside the llvm optimization passes that already know about it.

As such there is little value in allowing variable exception/rounding mode params for the constrained fp intrinsics - LLVM passes wouldn’t understand them (in all but trivial cases) and would have to assume the worst case (dynamic/strict) anyway.

We may have to agree to disagree here, but this is exactly what I’m asking for LLVM to do. I don’t want LLVM to complicate life just because all optimizations might not be applicable sometimes. There’s lots of optimizations that might not be applicable, and I want to avoid coding in the exceptions in my frontend if I don’t need to.

I’m not familiar enough with IEEE 754 to know if it specifies behaviors for the representation in the middle-end. I thought it talked more about allowable optimizations and required features than specific representation questions. Cherry-picking text, perhaps the closest to my argument would be the sentence in 4.2 that the “user can specify that the attribute parameter [from a] variable.” It’s not really saying that you have to be able to pass this in as a variable, but I’m going to pretend that I can make that claim anyways, haha.

uses the information it finds there to change the rounding mode operand in the first fadd to rmTowardZero

That sounds neat. I don’t think it should conflict with passing in the mode as a variable though. Since the langref says the value must agree with the current mode, I think it’d still be legal optimization to replace the argument value with a more precise one (either Value->Constant, or dynamic->known, or perhaps even replace it with undef if it sees that the mode must be wrong). If you don’t think that’s legal though, that would lend credibility to the need to use a custom token representation in an operand-bundle.

Also, I thought Simon seemed to be saying this analysis pass wouldn’t be legal (“there is no going back from dynamic fpexcept/fpround to constants”), but I think I must have misunderstood him, since I don’t think that’s what he meant.

To me it seems constant tokens do the job just fine without the need for a custom code path.

I guess I would have called custom tokens a custom code path too. I don’t think it’s a question of whether we need to represent this custom information—we clearly do—just a question of where in the pipelines and representations there should be customizations to hold the information.


Hi Joseph,

Thanks for the feedback. Ulrich Weigand raised the same concern about tokens in my Phabricator review. As I said there, I see the point. I’m definitely proposing a new use for the token type. For my use to be valid, I’d need to redefine token something like this: “a compiler-generated pseudo-value that is not intended to be represented in memory or register during program execution and can only be used by compiler-specific intrinsics and instructions.” I think that covers the existing uses correctly. It is a new definition, but not one that would require any changes to implement.

The other characteristics of tokens, such as not being allowed in PHI or select instructions, are exactly what I’m after. I want a type of value that means what the optimizer says it means and nothing else. However, I will admit that what I want is more like an immediate operand that can only have one of the small set of existing values. Unlike the tokens used by Windows exception handling and coroutine intrinsics, I don’t want these values to be part of a use-def chain, though perhaps Jameson does want that.

Anyway, I agree with your assessment that the operand bundle part of my proposal is on more solid ground than the token part.



Hi Jameson,

I don’t think I have a clear picture of what you want to be able to do from the front end. I’ll take a stab at it here, but maybe you could give me an example if what I’m saying is off base.

Modifying my earlier IR example with the assumption that we’d provide intrinsics for changing the rounding mode and use live values for the constraint operands, I’m imagining that you’re asking to be able to do something like this:

define double @foo(double %A, double %B, double %C) {
  %orig.rm = call i32 @llvm.getround()
  %new.rm = call i32 @llvm.setround(i32 2) ; 2=rmTowardZero (unlike C’s fesetround(), llvm.setround returns the new rounding mode)
  %tmp = call double @llvm.experimental.constrained.fadd(double %A, double %B, i32 %new.rm, i32 2) ; %new.rm = dynamically known, 2=strict
  %restored.rm = call i32 @llvm.setround(i32 %orig.rm)
  %result = call double @llvm.experimental.constrained.fadd(double %tmp, double %C, i32 %restored.rm, i32 2) ; %restored.rm = dynamically known, 2=strict
  ret double %result
}


If we had a way to get here, the constant folder could easily be taught to do what I suggested with regard to deducing the rounding mode from calls that set it. However, I think it would be an extremely onerous task for a front end to create the IR above. So, I think I must be wrong about this being what you want to do.

Regarding the IEEE 754, I have a hard time mapping its language to a front end implementation sometimes. I think it intends to leave implementation details completely up to the implementation. I frequently flip back and forth between the IEEE 754 spec and the C99 fenv spec trying to make sense of it. I believe that when section 4 of the IEEE spec talks about attributes, that maps (mostly) to what the C99 spec calls the “floating point environment” (and we’ve adopted that terminology in the LLVM Language Reference). I would argue that what IEEE 754 refers to as the “dynamic mode variable” is either amorphous or target defined. For X86 targets, for instance, the MXCSR and FPCW/FPSW registers represent this “variable” at runtime. Are you suggesting that you’d like it to be an actual value represented in the module somewhere? I can see how that might be useful for a front end, but I don’t know how we’d handle it during machine code generation.

Basically, I think I need more context to understand your concerns.



I don’t (currently?) expect the front-end to generate that implicitly, but I want to make it easy for the user to (explicitly) be able to write code such as:

define double @foo(double %A, double %B, i32 %new.rm) {
  %orig.rm = call i32 @llvm.getround()
  call void @setround(i32 %new.rm) ; maps to fesetround, perhaps through some sort of lookup table, such as load (gep @mode_table, %new.rm), or calls some other llvm intrinsic; the details don’t really matter
  %tmp = call double @llvm.experimental.constrained.fadd(double %A, double %B, i32 %new.rm, i32 2) ; fpround(%new.rm), fpexcept.strict
  %restored.rm = call i32 @fesetround(i32 %orig.rm)
  ret double %tmp
}


With the knowledge that the optimizer might later discover the true value of %new.rm (because of inlining or AbstractCallSites or returns-arg(0), for example), but that it’s perfectly valid also for the optimizer to fail. I think this gives the front-end the most flexibility about how to use this, by putting off the legalization as late as possible in the optimization pipeline. I can do this with the current framework, but it just forces me to do the legalization much earlier and thus potentially miss out on optimization opportunities it might otherwise have been able to express.

This does seem onerous from a C99 front-end with the out-of-band #pragma or -fpmode globals, but then only making it expressible out-of-band seems onerous to me. Another frontend could allow the user to write that IR quite directly:

@enum fpexcept ignore maytrap strict # IEEE enum, will be mapped to LLVM enum elsewhere in lowering
function foo(A, B, mode)
oldmode = getround()
tmp = fadd(A, B, mode, strict)

In practice, I suspect people would put abstractions and wrappers around this, so that—again, in practice—it’ll usually end up with a constant there. But unless something must be a constant due to backend constraints, why does LLVM need to enforce that?

I don’t specifically know how users will then take and use that, I just want it to be as flexible as it can be, so that our community can experiment with DSLs and compiler plugins and see what ends up working best.

Actually, the use of %orig.rm in your example is interesting too, since it seems nifty how it could be used to represent the fpround inference pass explicitly. This assumes there was a function getround (possibly intrinsic) that could recover the LLVM enum value for the dynamic environment. Although I’m not sure there’d be any current uses of that (since it’d be the same as just specifying “dynamic”), it’s just intended to show why I think it could be reasonable to represent this way.

%rm = call i32 @llvm.getround()

%tmp = call double @llvm.experimental.constrained.fadd(double %A, double %B, i32 %rm, i32 2) ; fpround(%rm), fpexcept.strict

Are you suggesting that you’d like it to be an actual value represented in the module somewhere? I can see how that might be useful for a front end, but I don’t know how we’d handle it during machine code generation.

Yes, I’m arguing for not ruling it out as a possibility too soon, even though the current user (C99) always puts a constant there. It’d still be handled the same way the side-channel information is handled now: eventually just drop it. We agree the langref says it’s up to the user to make sure the dynamic mode is correct (e.g. MXCSR and FPCW/FPSW on X86) and matches whatever constant is found here (or the constants must be “dynamic”+“strict”)—the information is only needed for optimization, right? It might be different if we didn’t already have flags that specified the information wasn’t statically available for optimization. But it sounds to me from the langref that the backend already must know how to deal with the dynamic case for all of these, so not exposing that to the front-end (as a variable argument) feels artificially limiting—and potentially even inhibiting of optimization opportunities.

Well, if it makes people think about the constrained fp design I’d say it’s worth it. Constrained fp doesn’t exactly get the crowds excited :wink:

When a frontend wants to emit fp instructions that may operate in a non-default fp env, it can set the IRBuilder flags to "fpexcept.strict" and "fpround.dynamic". You can have an analysis that later refines those values where possible (e.g. by inspecting calls to "fesetround" or "fesetexceptflag" or whatever libcall that particular language uses), including lowering to default-env fp ops. Sure, an analysis that understands the fp environment and refines the except/rounding parameters is possible (e.g. round.dynamic -> round.tonearest when all reaching control paths configure the fpenv that way). What I was saying is that from the moment we allow variable except/rounding params in constrained fp intrinsics – syntactically in the IR – we’d have to support that… so there is a maintenance overhead to prematurely enabling a feature that it is unclear users will ever be asking for.

Yep. Still, I think token constants are less invasive, since the customization is then confined to the ‘token’ type as a kind of compiler-builtin enum type.

redefine token something like this…

The other characteristics of tokens…

FWIW, I’d flip those. In my mind, the defining property of tokens is that they are def/use links which compiler transformations are not allowed to obscure. That’s the part that defies typical expectations about values in the IR, and that absolutely is needed for their use cases.

[and I tend to think of constants as something outside of def/use-land, but I can appreciate that in LLVM they do get def/use links hooked up, so maybe what you’re proposing isn’t as unnatural in that context as it sounds to me…]

Another point that may be helpful to consider is, assuming that over time the set of things we want to model like this grows, whether one or the other representation would lend itself better to managing that. So e.g. to keep separate the constants for rounding-mode vs fp-exception vs funclet, do we want literal enums, or some sort of “TokenKind”, or predicates like isRoundingMode(ConstantToken tok), or just conventions in the names … and does the choice of using tokens or not for these make any of that easier or harder?

Related, experimental intrinsics are supposed to be the lightest-weight way to extend the IR, I think that would be a good thing to preserve. I get that the use of tokens would be opt-in so anybody is still free to make experimental intrinsics that don’t use them, but regardless of whether it’s tokens or immediates, if there were a way to opt into this that could be more declarative and not require changing the parsers/emitters and IR type hierarchies, that could be valuable.


so there is a maintenance overhead to prematurely enable a feature that is unclear users will ever be asking for.
I’m imagining it’s just a ConstantInt *fpround = dyn_cast<ConstantInt>(arg_end()[-2]) whenever you want to inspect it, and then treating it as dynamic/strict if it’s not visible. Are you thinking it’s something more? There’s also a maintenance overhead on front-end maintainers if it is more restrictive than required, so take this as a request for it to be permissive on behalf of future experimental users :slight_smile:

The exception/rounding params of constrained fp ops in LLVM are, in contrast, hints to the optimizer to enable certain optimizations in non-default fp envs.

Yes, but it sounds like they are a specification to the optimizer to help it understand what the runtime semantics will be (e.g. making the implicit config register explicit). I’m not sure I see what distinction you’re making, although I don’t want to be a spec-lawyer.

I don’t want these values to be part of a use-def chain, though perhaps Jameson does want that.

Partially. I want them to be valid syntax to be in a use-def chain, even if they don’t do anything unless there’s an actual constant that gets inserted there. I want that because it should allow optimizing strictly more programs than any proposal which is only capable of representing constants (and because I think it seems less complicated).

By making it accept a value, what optimization do you lose? It should still be able to do all of the optimizations available to Constants now (by checking whether it does have a constant). But now I’d expect you can also optimize any program where the value became a constant only because of previous optimization passes—a benefit you’d get for free from the existing passes! I know C99 was conservative here, and can’t explicitly represent the concept of the “fpenv” or “fpexcept” state being a specific variable (just a pragma for on and off, and the compiler-specific fp-model command line options). But as this work has progressed, I’m hoping it’ll start to make the feature more accessible to more people (even if the total number of people that will ever numerically care about the difference is tiny).

I agree an operand bundle could be used. I suppose it might be rather like making it a keyword argument: still just conceptually a regular Use in the IR, just enumerated in a different list (by tag name instead of order)? It isn’t apparent to me why that would be better, unless we want it to eventually be meaningful on any call in addition to supporting passing in values?

But Joseph’s description of it makes me think of a general-purpose call/function attribute (defined as giving the fp settings at entry). I can hypothesize what that attribute could mean when applied to an arbitrary function or call, though I’m not sure what frontend would find it useful to emit it. I think that implementation could be capable of maintaining the same information (and getting a constant name shown in the IR printer), so it seems doable, although perhaps not quite trivial to implement in LLVM (because the inlining pass would need to be taught all about it, and there’s a slightly higher risk some pass might decide to drop all metadata and lose it).

To attempt to summarize, since I’ve been making a number of philosophical points here (each written as a pro for making it a variable Argument vs. requiring it to be a constant string):

  1. Most importantly, I think it makes sense because lowering seems to already have support for it (because it’s equivalent to dynamic and strict). Then, in no particular order:
  2. I think it could be a better model of the implementation in physical hardware: this layer is pretty limited in options, so it has to be just a register somewhere/somehow. Yeah, it’s often got a weird name and mnemonic for ld/st, and it just shows up as an implicit argument when needed (plus, it doesn’t usually get modeled as such by the register allocator). But it’s still basically just a regular Argument.
  3. I think it’s a slightly better model of the IEEE standard (or at least the two paragraphs I bothered to read): it says the hardware must support the value being given in a variable. It’d be a somewhat different situation if the hardware needed to select different instructions to support each mode (i.e., didn’t support dynamic mode).
  4. I still think it’d be simpler overall, but especially for front-ends: it doesn’t need a special side-channel in the front-end, or a legalization pass to know which constants are known to the optimization passes. Custom pretty-printers are a bit of extra work, true, but that isolates the complexity in one component in the backend (and apparently might find itself with other users).
  5. It’s more powerful (supports more optimizations): having them as regular Uses means that, in addition to constant strings typed in the source code or given on the command line, it can also represent variables that other optimization passes might turn into constants. (I am also generally wary of immArg for this reason, although I recognize that there are—and I’ve written—cases where dynamic support really isn’t possible or easy and immArg is required. But in this particular case, the dynamic/strict support appears to already exist as part of the relevant standards, so I’m mainly asking why it shouldn’t be exposed that way at the IR level too.)


I’m going to mostly stay out of the floating point specifics, but I want to point out a few alternative approaches which have been used elsewhere in the past.

We have the ability to require an intrinsic function argument to be a constant. We have support for annotations printed during IR serialization. Putting those together, we could use an argument encoding plus an annotation printer to get something like:

%sum = call double @llvm.experimental.constrained.fadd(double %x, double %y, i32 0, i32 1) ;; fpround.dynamic + fpexcept.strict

If we didn’t want to have token types involved, we could use operand bundles and a custom annotator. We’d get something like:

%sum = call double @llvm.experimental2.constrained.fadd(double %x, double %y) [ "fenv"(i32 0, i32 1) ] ;; fpround.dynamic + fpexcept.strict

</end alternatives>

Personally, I’d try to avoid token types here. Token types are relatively ill defined, and in most of the cases where we’ve used them they have been a proxy for multiple value return. In retrospect, I wish we’d done multiple value return for e.g. statepoints.


Based on what I’ve read in this thread, it doesn’t seem like tokens are a good fit here. If in the future we want to establish different regions of FP safety (i.e. inlining functions with different FP safety settings), then it might make sense to start using tokens.

I think we do want to support different regions of FP safety through inlining and other means, but I otherwise agree with your conclusion that tokens as I was using them are not a good fit. We should be able to handle different regions by using different constraint arguments (in whatever form).