RFC: attribute synthetic("reason")


I would like to propose that we add the following function attribute to LLVM:

  synthetic("reason")

This attribute can only be applied to functions. It is not a semantic statement about the function it decorates. It is, instead, an explicit directive to LLVM to not attempt to propagate information about the function body outside of the function, including by changing the attributes of the function. The expectation is that some special pass will eventually remove the attribute and enable normal optimization.
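
As a sketch, this is roughly what the proposed attribute might look like in IR. This is hypothetical syntax, of course; the attribute name and its string argument are exactly what is being proposed here, not existing LLVM syntax:

```llvm
; Hypothetical syntax.  Until a designated lowering pass strips the
; attribute, interprocedural analyses must not draw conclusions from
; this function's body or change its attributes.
define i8* @f(i8* %buffer, i32 %n) synthetic("coro.retcon") {
entry:
  ret i8* %buffer
}
```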

So, why should we add this?

Problem: coroutine structure

I’ve recently been working on implementing coroutines for Swift. This involves embracing and extending Gor’s excellent work on LLVM coroutines with an alternate code-generation pattern and ABI. (*) LLVM doesn’t natively support coroutines, which means that a pre-split coroutine represented in IR looks rather artificial.

(* This work has been happening out-of-tree while I qualify the design with Swift frontend support. That doesn’t mean it’s closed-source; if you’re curious, you can see the current documentation update here:
or just check out the implementation:

I promise it will be submitted to LLVM when we’ve proven the design to our satisfaction by starting to use it in the frontend. That should be within the next few months.)

For example, here’s a simple coroutine:

define i8* @f(i8* %buffer, i32 %n) {
entry:
  %id = call token @llvm.coro.id.retcon(i32 8, i32 4, i8* %buffer,
            i8* bitcast (i8* (i8*, i1)* @prototype to i8*),
            i8* bitcast (i8* (i32)* @allocate to i8*),
            i8* bitcast (void (i8*)* @deallocate to i8*))
  %hdl = call i8* @llvm.coro.begin(token %id, i8* null)
  br label %loop

loop:
  %n.val = phi i32 [ %n, %entry ], [ %inc, %resume ]
  call void @print(i32 %n.val)
  %unwind0 = call i1 (...) @llvm.coro.suspend.retcon.i1()
  br i1 %unwind0, label %cleanup, label %resume

resume:
  %inc = add i32 %n.val, 1
  br label %loop

cleanup:
  call i1 @llvm.coro.end(i8* %hdl, i1 0)
  unreachable
}


In this function body, the coroutine intrinsics are structurally special: they are tied to the identity of this function and cannot be independently understood. This is true in the same general sense that “ret” and “unwind” are structurally special in an ordinary function. The control flow and semantics of this function are internally consistent and correct, and accordingly most intra-procedural analyses will not pose a problem. However, its interaction with other functions, and especially with its caller, cannot be understood by a naive function-wide analysis which does not know about these intrinsics. Such an analysis is likely to take actions that would cause a miscompile.

For example:

  • It is not legal to clone these intrinsics into another function, e.g. during inlining. A function is a coroutine if (and only if) it contains an @llvm.coro.id.* call. Cloning the ‘id’ call into the caller would turn that function into a coroutine, which is not correct. Similarly, cloning a ‘suspend’ into the caller would probably create malformed IR and, if not, would be semantically incorrect in the same way as cloning a “ret” into the caller would be.

  • Some of these intrinsics will be expanded to things that are not ordinarily legal for functions to expand to. For example, the llvm.coro.suspend call will expand into a code sequence that contains a return. The llvm.coro.end call also expands to contain a return, but with different semantics. An analysis that does not know that might add the ‘noreturn’ attribute to this function, which will then be incorrect after expansion. If a caller propagates that information to its call site, it will be badly miscompiled.
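
To make the second point concrete, here is a minimal sketch of how that inference goes wrong. The exact pre- and post-split IR naturally depends on the lowering; this just illustrates the shape of the problem. Before splitting, the only terminators a naive attribute-inference pass can see are branches and an unreachable, so the function appears to never return:

```llvm
declare i1 @llvm.coro.suspend.retcon.i1(...)
declare i1 @llvm.coro.end(i8*, i1)

; Pre-split: there is no visible 'ret' anywhere in the body, so a
; FunctionAttrs-style analysis might conclude @g is noreturn...
define void @g() {
entry:
  %unwind = call i1 (...) @llvm.coro.suspend.retcon.i1()
  br i1 %unwind, label %cleanup, label %resume

resume:
  br label %cleanup

cleanup:
  call i1 @llvm.coro.end(i8* null, i1 false)
  unreachable
}
; ...but after lowering, both the suspend and the coro.end expand
; into code sequences that actually return control to the caller,
; so a 'noreturn' attribute inferred here becomes a miscompile.
```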

We need some way to stop these things from happening.

Should we just update existing passes to know about these intrinsics?

This is a reasonable question, but I am quite convinced the answer is “no”.

Even just considering coroutines, I think this is not really a reasonable request. There may be many different places that would need to be updated to know about most of these intrinsics. Moreover, it is likely that the set of intrinsics will grow and change over time, because the details of LLVM coroutine lowering are very much in flux, for a pair of reasons:

  • First, there isn’t a standard, widely-accepted ABI for coroutines. For ordinary functions, there’s general agreement on concepts like “return addresses”, “return registers”, “argument registers”, “argument areas”, and “the stack”, even if there’s a lot of target-specific variation in how to apply them. But with coroutines, there’s a lot of room for debate about how to do even basic things like allocating space for the execution record, yielding and resuming control, and passing values in and out of the coroutine. I’m already adding a second ABI for Swift (in fact, I’m really adding two closely-related ABIs), and more will come after that. These ABIs have trade-offs with each other, and I don’t think that any one of them will eventually “win”; I think we’ll just have to support them all, as they get adopted in different languages.

  • Even if there were agreement about the ABI, it’s not clear to me that there’s agreement about how best to compile and optimize them. As we gain more experience with this, it is not unlikely that we will want to evolve the intrinsics. I don’t think we want to encumber all this experimentation with the need to go update a ton of analyses, especially when the right answer across all of them is generally just “ignore the function for now”.

Moreover, I doubt that coroutines are the only feature with a similar set of problems. My experience with compilers that have tried to use LLVM IR for higher-level analyses and optimizations is that they often want some way of explicitly limiting LLVM’s ability to analyze their functions until they’ve gotten to a certain stage in the lowering. This becomes more and more likely the more structurally interesting the IR gets.

Aren’t there already ways to make a function opaque?

Well, yes, but they’re all independently meaningful, which is a problem. You can block inter-procedural analysis by giving a function weak linkage, but at the end of the day, it’s not okay to make all coroutine functions weak! Coroutines ought to be able to have the same spectrum of linkages, visibilities, etc. as normal functions, so if we play linkage tricks to block analysis, we have to also find a way to tunnel the real linkage all the way to the coroutine lowering pass. The same thing applies for ‘noinline’ — there might be good reasons to make the coroutine function ‘noinline’ even after lowering, so we have to tunnel the real value through.

It’s possible to make the call indirect, e.g. by hiding the function pointer behind an intrinsic just in case it gets devirtualized, but this requires a lot of awkward casting (all of which needs to be removed in order to make the call properly direct if possible later), and it creates a bunch of extra IR that then needs to be removed at the right point in a careful phase-order with the lowering. Still, it works, which is why it’s how I’m currently trying to solve the problem. However:
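
At a call site, that workaround looks roughly like the following. The @llvm.coro.prepare.retcon intrinsic name here reflects the out-of-tree design being described and is used purely for illustration:

```llvm
declare i8* @llvm.coro.prepare.retcon(i8*)
declare i8* @f(i8*, i32)

define i8* @caller(i8* %buffer) {
entry:
  ; Hide @f behind an opaque intrinsic so IPO cannot "see through"
  ; the call.  Note the awkward casting in both directions, all of
  ; which must later be cleaned up, in careful phase order with the
  ; lowering, to make the call properly direct again.
  %raw = call i8* @llvm.coro.prepare.retcon(
             i8* bitcast (i8* (i8*, i32)* @f to i8*))
  %fn = bitcast i8* %raw to i8* (i8*, i32)*
  %hdl = call i8* %fn(i8* %buffer, i32 10)
  ret i8* %hdl
}
```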

None of those approaches stops passes from changing the function attributes based on the function body, like the (very real) example above of an analysis that decides that the coroutine cannot return. As far as I know, there’s no way to stop this; my current pass just has explicit code to remove ‘noreturn’ when doing the lowering, but that does not seem like a satisfactory solution.

That is why I think we should have a dedicated attribute whose sole purpose is to block analysis.


(Forwarding response to llvm-dev with Dave’s permission.)

Optnone stops these things, doesn’t it? But then, as you say, you’d have to find some way to tunnel the true optnone through to the coroutine expansion pass.

Right. More importantly, we definitely want to allow internal optimization of the coroutine function — mem2reg, GVN, etc. — prior to lowering. That should include inlining into it. The attribute should only block propagating information out of the function implementation.


That all makes sense.

I don’t think the name “synthetic” is all that intuitive, though. Enum attributes are pretty cheap, maybe we should try to use a name closer to what we’re trying to implement? For example, we could add a new “coroutine_foo” attribute for every coroutine style we implement. We would have analysis helper functions to answer questions like “is IPO through calls to this function legal” and “is IPO through calls in this function legal”, and that’s where we’d maintain the list of all coroutine-like attributes.

For example, I assume you want to inline simple getters and setters into a coroutine early before turning it into a state machine. In the long run, it’s not like we want to completely block IPO, so we’re going to need something with clearer semantics eventually.

I specifically do not want to add a new coroutine attribute for each lowering. Adding an attribute is actually quite a bit of boilerplate even before you talk about updating any passes: in-memory representation, parsing, printing, serialization, and so on. Someone ought to be able to add a lowering without having to go through all that. In practice, the desired default behavior from the optimizer is exactly the same for every lowering: Don’t Make Assumptions About What You Can See In The Function Body.

That said, I can see your basic point: using a more general attribute sounds like premature generalization, and it would arguably outlaw coroutine-aware optimizations that propagate information out. I think the middle ground here is to make the attribute coroutine-specific but still make it parameterized by the lowering name, e.g. coroutine(“retcon”).
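
Concretely, that middle ground might look like this (again, hypothetical syntax; the lowering name in the string is illustrative):

```llvm
; A coroutine-specific attribute parameterized by the lowering name.
; The "retcon" string tells the optimizer which lowering pass will
; eventually remove the attribute, while the default behavior for
; every lowering remains the same: make no assumptions about what
; you can see in the function body.
define i8* @f(i8* %buffer, i32 %n) coroutine("retcon") {
entry:
  ret i8* %buffer
}
```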


I support the notion of the general synthetic marker. We ended up developing something very similar in downstream patches related to our embedded high-level IR abstraction notion. While we’ve been slowly migrating away from the original heavy use of such abstractions, they were an incredibly powerful tool for prototyping designs and figuring out what a good long-term design looked like. Having first-class support for this type of design prototyping upstream seems like a wonderful idea to me.

Quick semantic question: Do you intend the marker to prevent removal of an otherwise-unused internal function? (i.e. can your custom passes insert uses?) We’ve found semantics both with and without that guarantee to be useful; I just want to clarify what you intend. A reasonable design would be to have “synthetic” allow global DCE, but add an orthogonal attribute for the alternate case. (We call ours “has-latent-use”.)