I posted this article to Lobsters, and the thread received many insightful comments.
https://lobste.rs/c/xzh55g argues against requiring AI disclosure because it creates a mental shift in contributors:
- Less ownership: “This isn’t really my code, it’s AI code”
- Less pride in success: “The AI did the hard work, so my good code isn’t really an achievement”
- Less shame in failure: “The AI messed up, not me”
- Shifting responsibility: “Since I disclosed it’s AI code, the reviewers should fix the AI’s mistakes”
https://lobste.rs/c/lmes67 argues that any benefit from AI contributions is quickly outweighed by the practical consequences:
- Contributors take (get their code reviewed/merged)
- But don’t give back (learning, future quality contributions, eventual maintainership)
- Larger named projects get disproportionately targeted with low-effort submissions
The commenter sees no real pros to AI contributions - only weak arguments like “it’s inevitable” or “banning feels mean.”
They argue AI advocates exploit the politeness of maintainers, forcing low-quality code into projects while hiding behind the excuse that “a human pressed the button.”
https://lobste.rs/c/os1swf and https://lobste.rs/c/95xec5 suggest banning AI contributions from newcomers (a sketch of enforcing such a gate follows this discussion):
“AI generated code is not accepted from people with under N human-generated commits.”
One of these comments is an angry rant about a newcomer who submitted a large AI-generated PR to LLVM without understanding the code, creating a massive drain on reviewer time (the review racked up 100+ comments).
The commenter is furious that someone with no expertise expects seasoned maintainers to “hold their hand” while they spam AI output, calling it arrogant and disrespectful to decades of expert work.
They blame “toxic vibe code culture” for making maintainers feel guilty about rejecting such contributions.
LLVM’s policies prioritize encouraging submissions over protecting reviewer bandwidth. Most LLVM reviewers aren’t paid to do reviews; they volunteer their expertise during evenings and weekends, or squeeze reviews in between their day-job responsibilities.
We need reviewer-time-conscious policies that recognize expert hours as LLVM’s scarcest resource. Just as CAPTCHAs filter web spam, we need mechanisms to filter extractive contributions that consume disproportionate review effort.
I favor a no-AI-by-default policy, allowing only lightweight uses like code completion and grammar fixes, though precisely defining these exceptions would be challenging.
Without protecting reviewer time, LLVM risks burning out the volunteer experts whose unpaid hours sustain the project, all for the sake of appearing “welcoming” to people who may never become quality contributors themselves.