s/Django/the codebase/g, and the point stands against any repo for which there is code review by humans:
> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
> Django contributors want to help others, they want to cultivate community, and they want to help you become a regular contributor. Before LLMs, this was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
> In this way, an LLM is a facade of yourself. It helps you project understanding, contemplation, and growth, but it removes the transparency and vulnerability of being a human.
> For a reviewer, it’s demoralizing to communicate with a facade of a human.
> This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
There is little doubt that if we as an industry fail to establish and defend a healthy culture for this sort of thing, it's going to lead to a whole lot of rot and demoralization.
In the old days, you could assume that a PR was being offered in good faith by someone who was really fixing a problem. You might disagree with the proposed solution and reject the PR as written, but you assumed good faith. AI has flipped that on its head. Now, everyone assumes they are interacting with an AI (or at least a human using one to generate all the content) and that the human has little to no understanding of what they are proposing. Ultimately, the broad use of AI erodes trust. And that’s a shame.
AI autocomplete and suggestions built-in to Jira are making our ticket tracker so goddamn spammy that I’m 100% sure that “feature” has done more harm than good.
I don’t think anybody’s tracking the actual net-effects of any of this crap on productivity, just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”
I believe that to be the case, in part, because not a lot of organizations are usefully tracking overall productivity to begin with. Too hard, too expensive. They might “track” it, but so poorly it’s basically meaningless. I don’t think they’ve turned that around on a dime just to see if the c-suite’s latest fad is good or bad (they never want a real answer to that kind of question anyway)
Ironically my favorite use of Claude is removing caring about Jira from my workflow. I already didn't care about it, but now I don't have to spend any time on it.
I treat Jira like product owners treat the code. Which is infinitely humorous to me.
Horrible degrading take. Be the change you want to see. Don't fuel the fire that's burning you.
If something's not happening, something else's making it impractical. Saying this as a 10+ years product manager and R&D person with 20+ more years of engineering on top.
> I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
Are people generally unhappy with the outcomes of this? Anecdotally, it does seem to pass review later on. Code is getting through this way.
It's slippery. You're swamped with low-effort PRs, can't possibly test and review all of them. You will become a visible bottleneck, and guess whether it's easier to defend quality vs. getting "a lot of features". If you're tied by your salary as a reviewer, you will have to let go, and at the same time you'll suffer the consequences of the "lack of oversight" when things go south.
This is getting really out of control at the moment and I'm not exactly sure what the best way to fix it is, but this is a very good post in terms of expressing why this is not acceptable and why the burden is shifting onto the wrong people.
Will humans take this to heart and actually do the right thing? Sadly, probably not.
One of the main issues is that pointing to your GitHub contributions and activity is now part of the hiring process. So people will continue to try to game the system by using LLMs to automate that whole process.
"I have contributed to X, Y, and Z projects" - when they actually have little to no understanding of those projects or exactly how their PR works. It was (somehow) accepted and that's that.
I like the idea of donating money instead of tokens. I think django contributors are likely to know how to spend those tokens better than I might, as I am not a django core contributor.
Some projects ( https://news.ycombinator.com/item?id=46730504 ) are setting a norm of disclosing AI usage. Another project simply decided to pause contributions from external parties ( https://news.ycombinator.com/item?id=46642012 ). Instead of accepting drive-by pull requests, contributors have to show proof of work by working with one of the other collaborators.
There's definitely an aspect here where the commons, the goodwill effort of collaborators, is being infringed upon by external parties who are unintentionally attacking their time and attention with low-quality submissions that are now cheaper than ever to generate. It may be necessary to move to a more private community model of collaboration ( https://gnusha.org/pi/bitcoindev/CABaSBax-meEsC2013zKYJnC3ph... ).
Instead of people buying the tokens themselves, they should just donate the money to the core contributors and let those people decide how to spend on tokens.
Think most people recognize though that AI can generate more than humans can review, so the model does need to change somehow. Either less AI on the submitting side or more on the reviewing side (if that’s even viable).
Yeah, what happened to "review your own code first".
Even before AI I used to ban linting so I could spot and reject code that clearly showed no effort was put into it.
First occurrence of "unreadable" got a note, and a second one got a rejection. And by "unreadable" I do not mean missing semicolons or parenthesis styles or meaningless things like that. I mean obscured semantics or overcrowding and so on.
I agree with the sentiment but I am not sure the best way to go forward.
Suppose I encounter a bug in a FOSS library I am using. Suppose then that I fix the bug using Claude or something. Suppose I then thoroughly test it and everything works fine. Isn’t it kind of selfish to not try and upstream it?
I don't think anybody would complain about working code. Your PR would explain your reasoning and choice of solution, and on its own could make or break through acceptance criteria.
It's like every new innovation at this point is exacerbating the problem of us choosing short term rewards over long time horizon rewards. The incentive structure simply doesn't support people who want to view things from the bird's eye view. Once you see game theory, you really can't unsee it.
This is what happens when governments around the world spend decades inflating the currency to pay for their bloated projects, devaluing people's savings and paycheques and causing them to prioritise making money over anything else. You kinda gotta do it to survive.
game theory doesn't expand into continuous rounds of interactions over the course of a lifetime where previous rounds' outcomes are either reset or persist based on other actors entering the game from the open world, so it really is an inferior framework for evaluating long-term strategies.
With my type of development, I haven't run into the types of things, directly, that you very well explained, but I have personally run into the pain, I confess, of being OVERLY reliant on LLMs. I continue to try and learn from those hard lessons and develop a set of best practices in using AI to help me avoid those pain points in the future. This growing set of best practices is helping me a lot. The reason that I liked your article is because it confirmed some of those best practices that I have had to learn the hard way. Thanks!
Great message but I wonder if the people who do everything via LLM would even care to read such a message.
And at what point does it become hard or impossible to judge whether something is entirely LLM-generated or not? I sometimes struggle a lot with this, being an OSS maintainer myself.
"the people who do everything via LLM". That's a bit of a straw man characterization. I don't believe that there are many professional developers "do everything with an LLM'. I don't even know what that statement means.
I watched someone ask Claude to replace all occurrences of a string instead of using a deterministic operation like “Find and Replace” available in the very same VSCode window they prompted Claude from.
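For contrast, the deterministic version is a few lines and fully auditable; a minimal sketch (the function name and arguments here are my own, purely illustrative):

```python
from pathlib import Path

def replace_in_file(path: str, old: str, new: str) -> int:
    """Deterministically replace every occurrence of `old` with `new`.

    Unlike prompting an LLM, this touches exactly the characters
    requested, and the returned count makes the effect verifiable.
    """
    p = Path(path)
    text = p.read_text()
    count = text.count(old)
    p.write_text(text.replace(old, new))
    return count
```

The same effect is one keystroke away as Find and Replace in that VSCode window, or `sed -i 's/old/new/g' file` on the command line.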
They do exist; if "professional" means "hired" it has no bearing on quality, it is not in any shape equivalent to "judicious" nor "careful". If salary goes into "push features" that's gonna be the only incentive.
On a widely used open source project I maintain I've been seeing PRs in the last month that are a little off (look okayish but are trivial or trying to solve problems in weird ways), and then when I look at their account they started opening PRs within the last few weeks, and have opened hundreds of PRs spread over hundreds of repositories.
Curious what simon thinks about using an LLM to work on Django...
I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed. What I see now is people who are using LLMs to generate the code and write the PR description and handle the feedback from the PR review. It’s to the extent where I can’t tell if there’d be a difference if the reviewer had just used the LLM themselves. And that is a big problem.
[…]
> If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
Perhaps we should start making LLM-only open source projects (clearly marked as such): created by LLMs, open to LLM contributions, with some clearly defined protocols. I'd be interested to see where it would go. I imagine it could start as a project with a simple instruction file to include in your own project, to try to find abstractions which can be useful to others as a library and to look for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.
Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a known project contributor sounds better than having some LLM-generated code under your own name.
> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
You'd have to manage the contributions, or get your AI bots to manage them or something, but it would be great to have honeypots like this to attract all the low effort LLM slop.
Actually, I'd want to see that. All the AI companies keep saying it will take our jobs, human developers won't be necessary.
Well let them put their money where their mouth is. Let's see what happens, see what the agents create or fail to create. See if we end up with a new OS, kernel all the way up to desktop environment.
By what metric is “the level of quality is much, much higher” in the Django codebase? ‘cause other than the damn thing actually working, the primary metric of a codebase being high quality is how easy it is to contribute to. And evidently, it’s not.
The code is very dense. Clear, concise, elegant. But dense. An LLM doesn't generate code like that.
I think it's perfectly doable to use an LLM to write code for the Django codebase, but you'll have to supervise it and give it feedback very carefully (which is the article's point).
Have you spent much time with the Django codebase?
I remember when I was getting started with Django in the 0.9 days most of the assistance you got on the IRC channel was along the lines of "it's in this file here in the source, read it, understand it, and if you still have a question come back and ask again". I probably learned more about writing idiomatic Python from that than anything else.
genuine question: if the maintainer burden keeps scaling like this, does it change the calculus for startups building on top of OSS projects with small core teams? feels like dependency risk that doesn't show up in any due diligence.
> Before LLMs, [helping/contributing] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
I love Django. I've been using it professionally and on side projects extensively for the past 10 years. Plus I maintain(ed) a couple of highly used packages for Django (django-import-export and django-dramatiq).
Last year, I had some free time to try to contribute back to the framework.
It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and approved.
As such, I see the appeal of using an LLM to help first-time contributors. If I had Claude Code back then, I might have used it to figure out the bug I was eventually assigned.
I empathize with the author's argument tho. God knows what kind of slop they are served every day.
This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time.
Don't have any solutions ATM, only money to donate to these folks.
There is a clear correlation between the rise in LLM use and the volume of PRs and bug reports. Unfortunately, this has predominantly increased the volume of submissions and not the overall quality. In my view of the security issues reported, many are clearly LLM-generated, and at face value they don't seem completely invalid, so they must be investigated. There was a recent Django blog post about this [1].
The fellows and other volunteers are spending a much greater amount of time handling the increased volume.
I agree somewhat, as I deal with an internal legacy codebase that's pretty hard to follow, and I use Gemini, Claude, etc to help learn, debug solutions and even propose solutions. But there's a big difference in using it as a learning tool and just having the LLM "do it". I see little value in first time contributors just leaning on an LLM to just do it.
I applied to the djangonauts twice - but was rejected both times. I always liked the idea, but perhaps my profile was not what they were looking for /shrug
It is not pride to have your name associated with an open source project, it is pride that the code works and the change is efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
I think they don't understand what milquetoast actually means, as the post definitely isn't: Django quite clearly asserted itself and its rules.
What the parent comment was probably trying to say was something like "a completely reasonable, uncontroversial post that I'm glad to see them make", but chose milquetoast (a word that no normal human ever uses - and certainly not in casual conversation) due to an affectation of one kind or another.
On the contrary, they could have stated their points much more bluntly and strongly than they did in the post. I had the same impression upon reading it.
Milquetoast perfectly describes it. I am happy to see less common words used around here (especially when they convey the intended meaning this precisely), and I find claiming "affectation" of the person who used it unnecessarily rude.
I feel like open source is taking the wrong stance here. There’s a lot of gatekeeping, first. And second, this approach is like trying to stop a tsunami with an umbrella.
AI is here to stay. We can't stop it, however much we try.
I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
> I feel the successful OS projects will be the ones embracing the change, not stopping it.
Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside, this author is observing some issues with this model and speculating that there will be a detriment long-term.
The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches & best bets. It is a good thing that different people take different approaches rather than "everyone in on AI 100%." If the author is wrong time will tell.
When you waste time trying to deal with "AI" generated pull-requests, in your free time, you might change your mind.
I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.
I'm sorry I no longer accept PRs, but at the same time I continue to make my code available - if minor tweaks can be made to make that more useful for specific people they still have the ability to do that, I've not hidden my code and it is still available for people to modify/change as they see fit.
I disagree, this looks like the first signs that mass producing AI code without understanding hits a bottleneck at human systems. These open source responses have been necessary because of the volume of low quality contributions. It’ll be interesting to watch the ideas develop, because I agree that AI is here to stay.
OSS projects usually have a culture of adopting quality-focused development practices much faster than commercial projects (because of the cost of adoption), so it looks like the same concerns will eventually hit other kinds of projects.
I disagree with that. I can easily tell when my non-native English speaking coworkers use AI to help with their communications. Nine times out of ten, their communication has been improved through the use of AI.
if only there was a difference between native languages aiming at lossy fluency (feels better) and programming languages aiming at deterministic precision.
> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.
> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.
> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.
> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.
> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.
This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.
It's eminently actionable -- the Django maintainers can decide their sensitivity/tolerance for false positives and operate from there. That's what every other open source project is doing.
(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)
"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."
Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and I want to change something in what I think is an improvement, I'll clone it, have CC investigate the codebase and make the change I want, test it, and if it works nicely I'll open a PR explaining why I think this is a good change.
If the maintainers don't want to merge it for whatever reason, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.
You're subtly shifting the framing to defend doing something different than the post describes.
It makes it kind of unclear whether you understand the difference between using CC to "investigate the codebase" so you can make a change which you (implicitly) do understand, versus using an LLM to make a plausible-looking PR when in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR".
LLMs are to open source contributions as Photoshop is to Tinder.
Or tinder to photoshop. Or tinder to instagram to fb to geocities to newsgroups/bbs.
> it’s such an honor to have your name among the list of contributors
I can't help but feel there's something very, very important in this line for the future of dev.
Another project has started to decline to let users directly open issues ( https://news.ycombinator.com/item?id=46460319 ).
edit: Also I applaud the debian project for their recent decision to defer and think harder about the nature of this problem. https://news.ycombinator.com/item?id=47324087
Shameless plug: I wrote an essay a few weeks ago pushing this exact same thesis. https://essays.johnloeber.com/p/31-open-source-software-in-t...
Or, instead of spending on tokens, paying maintainers and contributors directly.
Interesting breadth of takes in the comments.
It was so easy prior to AI.
There may not be many, but these people do exist.
I watched someone ask Claude to replace all occurrences of a string instead of using a deterministic operation like “Find and Replace” available in the very same VSCode window they prompted Claude from.
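To illustrate the point about determinism (a hypothetical sketch, not what that person actually typed): a plain string replacement always produces the same output for the same input, with zero risk of the model creatively altering unrelated text.

```python
# Deterministic find-and-replace: same input, same output, every time.
# The strings here are made up for illustration.
source = 'greeting = "hello_world"\nprint("hello_world")\n'
updated = source.replace("hello_world", "hello_django")
print(updated.count("hello_django"))  # -> 2
```

The editor's Find and Replace does exactly this; routing it through an LLM adds latency and a nonzero chance of unintended edits for no benefit.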
They do exist; if "professional" just means "hired", it has no bearing on quality, and it's in no way equivalent to "judicious" or "careful". If the salary rewards "push features", that's going to be the only incentive.
On a widely used open source project I maintain I've been seeing PRs in the last month that are a little off (look okayish but are trivial or trying to solve problems in weird ways), and then when I look at their account they started opening PRs within the last few weeks, and have opened hundreds of PRs spread over hundreds of repositories.
Curious what simon thinks about using an LLM to work on Django...
I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed. What I see now is people who are using LLMs to generate the code and write the PR description and handle the feedback from the PR review. It’s to the extent where I can’t tell if there’d be a difference if the reviewer had just used the LLM themselves. And that is a big problem.
[…]
> If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
Perhaps we should start making LLM-only open source projects (clearly marked as such): created by LLMs, open to LLM contributions, with some clearly defined protocols. It'd be interesting to see where it would go. I imagine it could start as a project with a simple instruction file to include in your own project, one that tries to find abstractions that could be useful to others as a library and looks for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.
Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a contributor to some known project sounds better than having some LLM-generated code under your own name.
OpenClaw https://github.com/openclaw/openclaw is effectively that - 1,237 contributors, 19,999 commits and the first commit was only back in November.
Simon, as co-creator of Django, what's your take on this story?
I think this line says everything:
> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
I love it. Sounds like good advice for submitting a PR to any project!
Please do, that would be amazing.
You'd have to manage the contributions, or get your AI bots to manage them or something, but it would be great to have honeypots like this to attract all the low effort LLM slop.
I like the idea that we could quarantine away LLM contributions like how Twitter quarantines the worst of social media away from Mastodon etc.
Moltbook meets GitHub? Sounds like a billion dollar valuation (sarcasm tag deliberately omitted).
Actually, I'd want to see that. All the AI companies keep saying it will take our jobs, human developers won't be necessary.
Well let them put their money where their mouth is. Let's see what happens, see what the agents create or fail to create. See if we end up with a new OS, kernel all the way up to desktop environment.
By what metric is “the level of quality is much, much higher” in the Django codebase? ‘cause other than the damn thing actually working, the primary metric of a codebase being high quality is how easy it is to contribute to. And evidently, it’s not.
The code is very dense. Clear, concise, elegant. But dense. An LLM doesn't generate code like that.
I think it's perfectly doable to use an LLM to write code for the Django codebase, but you'll have to supervise it and give feedback very carefully (which is the article's point).
Have you spent much time with the Django codebase?
I remember when I was getting started with Django in the 0.9 days most of the assistance you got on the IRC channel was along the lines of "it's in this file here in the source, read it, understand it, and if you still have a question come back and ask again". I probably learned more about writing idiomatic Python from that than anything else.
genuine question: if the maintainer burden keeps scaling like this, does it change the calculus for startups building on top of OSS projects with small core teams? feels like dependency risk that doesn't show up in any due diligence.
Quote of the year:
> Before LLMs, [helping/contributing] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
I love Django. I've been using it professionally and on side projects extensively for the past 10 years. Plus, I maintain(ed) a couple of highly used packages for Django (django-import-export and django-dramatiq).
Last year, I had some free time to try to contribute back to the framework.
It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and approved.
As such, I see the appeal of using an LLM for first-time contributors. If I'd had Claude Code back then, I might have used it to figure out the bug I was eventually assigned.
I empathize with the author's argument, though. God knows what kind of slop they are served every day.
This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time.
Don't have any solutions ATM, only money to donate to these folks.
There is a clear correlation between the rise in LLM use and the volume of PRs and bug reports. Unfortunately, this has predominantly increased the volume of submissions, not their overall quality. In my view, many of the security issues reported are clearly LLM-generated, and since at face value they don't seem completely invalid, they must be investigated. There was a recent Django blog post about this [1].
The fellows and other volunteers are spending a much greater amount of time handling the increased volume.
[1] https://www.djangoproject.com/weblog/2026/feb/04/recent-tren...
Thank you. django-dramatiq has been fantastic.
Awesome! Glad you like it :)
I agree somewhat, as I deal with an internal legacy codebase that's pretty hard to follow, and I use Gemini, Claude, etc. to help me learn, debug, and even propose solutions. But there's a big difference between using it as a learning tool and just having the LLM "do it". I see little value in first-time contributors just leaning on an LLM to do it for them.
I picked up a change that had broad consensus and quite a bit of excitement over even by some core devs.
That ticket now just sits there. The implementation is done, the review is done, there are no objections. But it's not merged.
I think something is deeply wrong and I have no idea what it is.
Looking at your PR, the ticket is still marked as "Needs documentation: yes" and "Patch needs improvement: yes".
If this is done, you should update it so it appears in the review queue.
Have you tried pinging in the Discord about it?
For anybody else in this position, I would heavily plug the Djangonauts program.
I applied to the Djangonauts twice, but was rejected both times. I always liked the idea, but perhaps my profile was not what they were looking for /shrug
Someone better let Simon know!
I disagree with these takes
The pride isn't in having your name associated with an open source project; it's in the code working and the change being efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
If you think OpenClaw is a new species, then why are you happy with its enslavement?
agents can modify our world based on their predilection in reaction to how we treat them
they are something to coexist with
the strawman aspect is out of scope
Very well said.
Incredibly milquetoast. I would not like to work with anyone who goes against these points.
Isn't the meaning of milquetoast opposite to what you are probably trying to convey?
I think they don't understand what milquetoast actually means, as the post definitely isn't that: Django quite clearly asserted themselves and their rules.
What the parent comment was probably trying to say was something like "a completely reasonable, uncontroversial post that I'm glad to see them make", but chose milquetoast (a word that no normal human ever uses - and certainly not in casual conversation) due to an affectation of one kind or another.
On the contrary, they could have stated their points much more bluntly and strongly than they did in the post. I had the same impression upon reading it.
Milquetoast perfectly describes it. I am happy to see less common words used around here (especially when they convey the intended meaning this precisely), and I find the accusation of "affectation" against the person who used it unnecessarily rude.
Is it?
I feel like open source is taking the wrong stance here. First, there's a lot of gatekeeping. And second, this approach is like trying to stop a tsunami with an umbrella. AI is here to stay. We can't stop it, however much we try.
I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
> I feel the successful OS projects will be the ones embracing the change, not stopping it.
Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside, this author is observing some issues with this model and speculating that there will be a detriment long-term.
The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches & best bets. It is a good thing that different people take different approaches rather than "everyone in on AI 100%." If the author is wrong time will tell.
When you waste time trying to deal with "AI" generated pull-requests, in your free time, you might change your mind.
I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.
I'm sorry I no longer accept PRs, but at the same time I continue to make my code available - if minor tweaks can be made to make that more useful for specific people they still have the ability to do that, I've not hidden my code and it is still available for people to modify/change as they see fit.
I disagree, this looks like the first signs that mass producing AI code without understanding hits a bottleneck at human systems. These open source responses have been necessary because of the volume of low quality contributions. It’ll be interesting to watch the ideas develop, because I agree that AI is here to stay.
OSS projects usually have a culture of adopting quality-focused development practices much faster than commercial projects (because of the cost of adoption), so it looks like the same concerns will eventually hit other kinds of projects.
If you can TELL someone used AI, it's always, without fail, a bad use of AI.
I disagree with that. I can easily tell when my non-native English speaking coworkers use AI to help with their communications. Nine times out of ten, their communication has been improved through the use of AI.
If only there were a difference between natural languages, which aim at lossy fluency (whatever feels better), and programming languages, which aim at deterministic precision.
> I feel the successful OS projects will be the ones embracing the change
You'll have to embrace the `ccc` compiler first, lol
I can't find a single place in TFA (which doesn't represent or claim to represent open source writ large) that's encouraging people to not use AI.
> So how should you use an LLM to contribute?
> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.
> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.
> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.
> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.
> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.
This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.
It's eminently actionable -- the Django maintainers can decide their sensitivity/tolerance for false positives and operate from there. That's what every other open source project is doing.
(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)
Literally the first line of the article:
"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."
That's not telling people to not use LLMs. It's telling them that using them in a specific way is not helpful.
Reading beyond the first line makes it clear that the problem is a lack of comprehension, not LLM use itself. Quoting:
> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed.
Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and I want to change something in what I think is an improvement, I'll clone it, have CC investigate the codebase, make the change I want, and test it, and if it works nicely I'll open a PR explaining why I think it's a good change.
If the maintainers don't want to merge it, for whatever reason, that's fine and the nature of open source. But I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.
You're subtly shifting the framing to defend doing something different than the post describes.
It's kind of unclear whether you understand the difference between using CC to "investigate the codebase" so you can make a change that you (implicitly) do understand, versus using an LLM to produce a plausible-looking PR when in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR".
I think if I were spamming OSS projects with AI slop, I would appreciate knowing which projects were open to accepting my changes.