@rjmccall or @efriedma-quic – do either of you have concerns as code owners for the current clang codegen? (Just want to make sure you’ve had a chance to consider the proposal now that conversations seem to be winding down.)
Right now, it sounds like there isn’t a viable path over the next 2-3 years for this to become part of the main code-generation pipeline, so (respectfully but honestly) this is really just adding another static analysis IR with a hope of eventually serving as an alternate pipeline. We should therefore analyze it on those terms. It seems to me that the costs of integrating it really come down to CI overhead and maintenance burden.
The maintenance burden depends mostly on our expectations for contributors. ClangIRGen would be a new AST consumer that would have to be updated for AST changes, but the AST has overall gotten quite stable. We can roughly categorize AST changes three ways:
- Small changes to the AST for non-feature-development purposes, like improving source fidelity. These usually don’t require code generator changes at all.
- Local changes to the AST for feature development, like adding a new attribute or a CastKind or a statement/expression form. These usually do end up requiring code generator changes, so the key question is whether contributors will be expected to implement features in ClangIRGen as well as CodeGen. I imagine the answer will be “no”, and that the expectation on general Clang contributors will just be that they’ll keep the existing ClangIR tests passing.
- Structural changes to the AST to better handle some cross-cutting complexity, like if we changed how we represent full-expressions in order to capture things like lifetime extension more explicitly. These usually require changes to all code generators simultaneously just to keep things working. Adding another code generator would be a substantial burden on these kinds of changes; fortunately, though, they’re rare in practice, and I think we’ve done a pretty good job of staying ahead of the need for this. (The closest I can think of is that compound assignments probably ought to change so that we can explicitly represent the implicit conversions on the LHS and value.)
The CI overhead is pretty substantial if the expectation is that Clang build bots will now also build the whole MLIR tree; I’ve been told that would have a large impact. That doesn’t have to be the expectation, of course, as long as it can be configured out without removing core compiler features. It would also mean, though, that we can’t use ClangIR to implement any core compiler features, although I suspect that would be an overhead we’d struggle to justify incrementally anyway.
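For illustration, “configured out” could look like a dedicated CMake option that gates the MLIR dependency; the `CLANG_ENABLE_CIR` option name below is an assumption for the sketch, not a settled interface:

```shell
# Hypothetical configuration: bots that don't test CIR build clang
# without the ClangIR/MLIR dependency at all.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS="clang" \
  -DCLANG_ENABLE_CIR=OFF

# Bots that do exercise CIR opt in explicitly, pulling MLIR into the build.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS="clang;mlir" \
  -DCLANG_ENABLE_CIR=ON
```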
Short-term, my understanding is that this is only proposing moving the code into the source tree, under “Peripheral Tier” rules (LLVM Community Support Policy — LLVM 19.0.0git documentation). So nothing would change in terms of what is actually built by default, or what non-ClangIR developers are expected to maintain. In that respect, the question is basically just whether we think the experiment has shown it has potential. I think it’s met that bar; the basic infrastructure is working, people are finding it useful, and it’s consistent with the long-term goals of clang as a project.
There are open questions about various issues; I’d specifically call out:
- the time it takes to build LLVM/MLIR/clang if we start building it by default
- the impact on compile-time if we start constructing ClangIR by default
- avoiding duplicate “codegen” code
- how to represent target ABIs
But we don’t need to resolve all of those now.
Also, I think some people in this discussion are being overly ambitious about what’s actually practical to do with ClangIR on its own. ClangIR is not Polygeist; effectively translating from ClangIR to MLIR’s loop-based dialects is probably more complicated than ClangIR itself.
> Right now, it sounds like there isn’t a viable path over the next 2-3 years for this to become part of the main code-generation pipeline, so (respectfully but honestly) this is really just adding another static analysis IR with a hope of eventually serving as an alternate pipeline. We should therefore analyze it on those terms. It seems to me that the costs of integrating it really come down to CI overhead and maintenance burden.
Thanks for the feedback, John. This is mostly right; I’ll just add that we do have an LLVM lowering story, and that’s something that’s evolving faster than the lifetime-checker bits (given that most open-source contributors are currently working on that).
> The maintenance burden depends mostly on our expectations for contributors. ClangIRGen would be a new AST consumer that would have to be updated for AST changes, but the AST has overall gotten quite stable.
True; in the past few years, we’ve only had to deal with breakage a handful of times.
> The CI overhead is pretty substantial if the expectation is that Clang build bots will now also build the whole MLIR tree. I’ve been told that that would be a large impact. That doesn’t have to be the expectation, of course, as long as it can be configured out without removing core compiler features; of course, that would also mean that we can’t use ClangIR to implement any core compiler features, although I suspect that would be an overhead we’d struggle to justify incrementally anyway.
This is an accurate assessment. There is some expectation that our MLIR build time might be overstated, so the concern here might be smaller as well. There are also parties interested in using ClangIR earlier than the rest of upstream, so feasibility for the community as a whole isn’t always the driving force for feature development here.
> Short-term, my understanding is that this is only proposing moving the code into the source tree, under “Peripheral Tier” rules (LLVM Community Support Policy — LLVM 19.0.0git documentation). So nothing would change in terms of what is actually built by default, or what non-ClangIR developers are expected to maintain. In that respect, the question is basically just whether we think the experiment has shown it has potential. I think it’s met that bar; the basic infrastructure is working, people are finding it useful, and it’s consistent with the long-term goals of clang as a project.
Thanks for the feedback Eli.
> There are open questions about various issues: I’d specifically call out:
> - the time it takes to build LLVM/MLIR/clang if we start building it by default
> - the impact on compile-time if we start constructing ClangIR by default
> - avoiding duplicate “codegen” code
> - how to represent target ABIs
> But we don’t need to resolve all of those now.
Indeed. Those are things we deeply care about, and the last two are actually part of our shorter-term goals: we have some obvious duplication we want to tackle early in the upstream story, and ABI representation is becoming a blocker for evolving LLVM lowering further.
> ClangIR is not Polygeist; effectively translating from ClangIR to the MLIR’s loop-based dialects is probably more complicated than ClangIR itself.
We’re very excited about that potential topic! We’ve reached out to some of the engineers on the project about potential future integration on that front.
Discussion seems to have wound down, thank you to everyone for leaving comments!
There were some issues identified that should be considered while working on upstreaming this (such as compile time performance, duplication of codegen efforts, etc), but I do not see any blocking issues and so consensus appears to be that this RFC is now accepted.
Thank you for the RFC, looking forward to seeing where these efforts take us!
I assume you only have data for SingleSource, because none of the MultiSource tests build and run successfully?
The impact on clang build times has already been discussed a bit, but I think it would be great to get a better understanding of the impact on compile-time performance when using Clang with the CIR code path. Getting data for at least some of the CTMark subset should give a more robust signal, and the programs used for CTMark are quite ‘small’ by current standards.
Just hypothetically, if CIR increases compile time or memory usage by 5x, then I’d be doubtful there is a path towards replacing the current CodeGen.
> There were some issues identified that should be considered while working on upstreaming this (such as compile time performance, duplication of codegen efforts, etc), but I do not see any blocking issues and so consensus appears to be that this RFC is now accepted. Thank you for the RFC, looking forward to seeing where these efforts take us!
We’ll definitely look into these issues as we make progress.
> Thank you for the RFC, looking forward to seeing where these efforts take us!
Thanks Aaron & everyone else!
Hi Florian,
> I assume you only have data for SingleSource, because none of the MultiSource tests build and run successfully?
We haven’t started exploring MultiSource yet. Our focus has been on SingleSource and we want to complete it before moving on to other things. However, MultiSource is a logical next step for us to take in the future.
> Getting data for at least some of the CTMark subset should give a more robust signal and the programs used for CTMark are quite ‘small’ by current standards. Just hypothetically if CIR increases compile-time or memory usage by 5x, then I’d be doubtful if there was a path towards replacing current CodeGen.
Yes, CTMark would be a great option. I just did a quick search, and all the results point back to MultiSource; we’ll prioritize based on those when we start exploring other sources.
Hi Bruno, et al.
Thanks a lot for the RFC and the work behind ClangIR.
There are two aspects I am curious about. I realize these aren’t the questions you are asking, but rather ones that will come after:
- How much performance overhead would the AST → ClangIR → LLVM IR → machine code path add? That’s important for cases that use JIT compilation and Clang as a library.
- Do you have a plan for the longer-term maintenance strategy? Do we have a strong long-term commitment (5+ years) in terms of manpower to maintain that part, assuming the community will be in a learning phase during that time?
I do not have a strong opinion on the proposal; I am trying to evaluate both the technical and the social aspects of the longer-term vision here.
I just tried to use ClangIR on a real-world project to get a feel for it. It crashed, and the responses show that ClangIR still lacks a lot of features from the original Clang CodeGen: Documents for using CIR will be helpful · Issue #484 · llvm/clangir · GitHub
I don’t think this is a blocker for upstreaming clangir. But I feel it is worth sharing this here.
Thanks for sharing. We’re still working our way through the SingleSource benchmarks, so the crash you found is totally expected at this point.
It looks like you’re starting to upstream ClangIR, great! But it looks like there are some unaddressed concerns, mainly about the CIR to LLVM IR part.
And I am wondering if we can benefit from CIR while working around the concerns about the CIR to LLVM IR part. I think this would help land CIR.
In short, I think in the first few versions we can avoid upstreaming the CIR to LLVM IR module and only contribute the CIRGen and CIR analysis modules. That said, the pipeline may look like:
```
Source -> Parser and Sema -> AST -----> CIR -----> analysis passes and transform-pass helpers for analysis
                              |
                              ------> CodeGen -------> LLVM IR
```
So we can have multiple consumers for the parsed AST: one is the traditional CodeGen part, and the other is the newly introduced CIR part. We generate CIR from the AST and can perform analyses on it. We can also run some transformation passes on the CIR if they are helpful to the analysis, but we won’t perform optimization transformations, since we discard the CIR at the end of the pipeline.
Has this idea already been discussed? If not, I feel it is pretty promising. It can address the maintenance concerns at least for the first few years. It is also relatively easy to implement this pattern: all you need is to add a MultiASTConsumer somewhere in the pipeline. If the idea has already been discussed and you don’t think it is good, I’d like to hear why.
For the CIR to LLVM IR part, I think we need it if we want to perform some language-specific optimizations on the CIR (like the coroutine examples given earlier). But given the current status of CIR, that may be a much more distant plan.
Hi @ChuanqiXu, thanks for your interest.
I don’t understand what you mean by unaddressed concerns; the CIR to LLVM IR part is within our original goals/design and has been under heavy development for the past couple of years.
> … In short, I think in the first few versions, we can avoid putting the CIR to LLVM IR module and only contribute the CIRGen and CIR analysis module … I am wondering if we can benefit from CIR and workaround the concerns about the CIR to LLVM IR part. I think this will be helpful landing CIR.
I have no knowledge of such concerns. LLVM lowering is the best way to check correctness for CIR, not to mention that the internal products we’ve been working on also rely on it.
> We generate CIR from AST and we can perform analysis on the CIR. We can also perform some transformation passes on the CIR if these transformation passes are helpful to the analysis. But we won’t perform optimization transformations since we will discard the CIR in the end of the pipeline.
You can already do that with the current pipelines implemented for CIR. Some transformations are only applied on the path that lowers to LLVM/MLIR; the lifetime checker, for instance, doesn’t use those, and it’s all configurable by compiler flags. It’s totally reasonable for folks interested in lowering CIR to LLVM (or to MLIR dialects) to be paving those paths as we upstream work.
> Is the idea already discussed? If not, I feel this is pretty promising.
We regularly discuss such topics in our monthly meeting. I suggest you join our next call to clarify your understanding of the project, we are happy to help you navigate it: MLIR C/C++ Frontend Working Group [Mon, Jun 3rd]
I’ll also kindly ask that any new topics be addressed in a different thread; this one has already wrapped up and serves as good historical context for the upstream side of things. Let’s avoid rehashing!
> I don’t understand what you mean by unaddressed concerns, CIR to LLVM IR part is within our original goals/design and is under heavy development in the past couple years.
I mean the concerns raised by @rjmccall and @efriedma-quic. IIRC, I didn’t see a direct response to those concerns, nor did they say their concerns were addressed.
> You can already do that with the current pipelines implemented for CIR, some transformations are only applied in the path that lowers to LLVM/MLIR, the lifetime time checker for instance doesn’t use those - and it’s all also configurable by compiler flags. It’s totally reasonable for folks interested in lowering CIR to LLVM (or to MLIR dialects) to be paving those paths as we upstream work.
I am not sure I stated my technical point clearly. I mean: can we enable the CIR analysis while reusing the current CodeGen (AST → LLVM IR) pipeline? If yes, my problem is solved, and it would be pretty helpful if you could share some links.
Also, I feel you somehow misunderstood my point. I am not blocking the upstreaming of the CIR to LLVM IR part. I am asking: at compile time, can we enable the CIR analysis and reuse the current, stable CodeGen part in some configuration?
It would be pretty helpful for adding more practical users of CIR. For example, if I want to use CIR in some products, it would be a pretty big blocking issue if I have to enable the CIR to LLVM IR part, since end users may have concerns about its correctness. But that may not be the case if we only add some analyses.
It doesn’t conflict with upstreaming the CIR to LLVM IR part or with your maintenance plan.
> We regularly discuss such topics in our monthly meeting. I suggest you join our next call to clarify your understanding of the project, we are happy to help you navigate it: MLIR C/C++ Frontend Working Group [Mon, Jun 3rd]
Yeah, but I am not sure I can promise to attend, due to the time difference.
> I’ll also kindly ask that any new topic be address in a different thread, this thread has already wrapped and serves a good historical context for the upstream side of things. Let’s avoid rehashing!
Yes. I just felt it belonged here, since the topic is still about the concerns regarding the CIR to LLVM IR part.
The RFC discussion was specifically focused around the question “should we merge ClangIR into the LLVM tree”. The question was answered, so the thread is done.
Some of the points brought up in this thread are probably worth a followup discussion… but very long threads are hard to navigate. If there’s some specific comment that you want to reference, you can link to it from a new thread.
OK, that makes sense.
It looks better to discuss this in ClangIR’s repo: Do we have an analysis only pipeline (avoiding 'CIR to LLVM IR' part)? · Issue #633 · llvm/clangir · GitHub
Thanks for the clarifications, added a reply there.