Interactive debuggers are a great way to waste a ton of time and get absolutely nowhere. They do have their uses, but those are not all that common. The biggest use case for me for GDB has been inspecting stack traces; having a good mental model of the software you are working on is usually enough to tell you exactly what went wrong if you know where it went wrong.
Lots of people spend way too much time debugging code instead of thinking about it before writing.
I've tried interactive debuggers but I've yet to find a situation where they worked better than just printing. I use an interactive console to test what stuff does, but inline in the app I've never hit anything where printing wasn't the straightforward, fast solution.
I'm not above the old print here or there but the value of an interactive debugger is being able to step and inspect the state of variables at all the different call sites, for instance.
I've only found them to be useful in gargantuan OOP piles where the context is really hard to keep in your head and getting to any given point in execution can take minutes. In those cases interactive debugging has been invaluable.
Nitpicking a bit here, but there's nothing wrong with printf debugging. It's immensely helpful for debugging concurrent programs, where stopping one part would mess up the state and maybe even avoid the bug you were trying to reproduce.
As for tooling, I really love AI coding. My workflow is pasting interfaces in ChatGPT and then just copy pasting stuff back. I usually write the glue code by hand. I also define the test cases and have AI take over those laborious bits. I love solving problems and I genuinely hate typing :)
The old fogeys don't rely on printf because they can't use a debugger, but because a debugger stops the entire program and requires you to go step by step.
Printf gives you an entire trace or log you can glance at, giving you a bird's eye view of entire processes.
In terms of LOCs maybe, in terms of importance I think it's much less. At least that's how I use LLMs.
While I understand that <Enter model here> might produce the meaty bits as well, I believe that having a truck factor of basically 0 (since no-one REALLY understands the code) is a recipe for disaster and, I dare say, for poor long-term maintainability of a code base.
I feel that any team needs someone with that level of understanding to fix non-trivial issues.
However, by all means, I use the LLM to create all the scaffolding, test fixtures, ... because that is mental energy that I can use elsewhere.
Agreed. If I use an LLM to generate fairly exhaustive unit tests of a trivial function just because I can, that doesn’t mean those lines are as useful as core complex business logic that it would almost certainly make subtle mistakes in.
I think the parent commenter's point was that it is nearly trivial to generate variations on unit tests in most (if not all) unit test frameworks. For example:
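(The original example seems to be missing here; a minimal, purely illustrative sketch of what parameterised variations look like in pytest, using a hypothetical is_leap_year function:)

```python
import pytest

def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

@pytest.mark.parametrize(
    "year, expected",
    [
        (2000, True),   # divisible by 400
        (1900, False),  # divisible by 100 but not 400
        (2024, True),   # divisible by 4
        (2023, False),  # not divisible by 4
    ],
)
def test_is_leap_year(year, expected):
    assert is_leap_year(year) == expected
```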
A belief that the ability of LLMs to generate parameterizations is intrinsically helpful to a degree which cannot be trivially achieved in most mainstream programming languages/test frameworks may be an indicator that an individual has not achieved a substantial depth of experience.
The useful part is generating the mocks. The various auto mocking frameworks are so hit or miss I end up having to manually make mocks which is time consuming and boring. LLMs help out dramatically and save literally hours of boring error prone work.
Why mock at all? Spend the time making integration tests fast. There is little reason a database, queue, etc. can't be set up on a per-test-group basis and be made fast. Reliable software is built upon (mostly) reliable foundations.
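As a sketch of what that per-test-group setup can look like in practice (assuming Python, pytest, SQLAlchemy, and the testcontainers package with a local Docker daemon; the orders table is just an illustration):

```python
import pytest
import sqlalchemy
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="module")          # one container per test module/group
def db_engine():
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:     # apply schema once per group
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id serial PRIMARY KEY, total numeric)"))
        yield engine

def test_insert_and_read(db_engine):
    with db_engine.begin() as conn:
        conn.execute(sqlalchemy.text("INSERT INTO orders (total) VALUES (42)"))
        count = conn.execute(sqlalchemy.text("SELECT count(*) FROM orders")).scalar()
    assert count == 1
```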
hmmmm. I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components. And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
> I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components.
There is a dramatic difference between unreliable in the sense of S3 or other services and unreliable as in "we get different sets of logical outputs when we provide the same input to a LLM". In the first, you can prepare for what are logical outcomes -- network failures, durability loss, etc. In the latter, unless you know the total space of outputs for a LLM you cannot prepare. In the operational sense, LLMs are not a system component, they are a system builder. And a rather poor one, at that.
> And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
Yeah, it's not that hard to include that in modern testing.
There are thousands of projects out there that use mocks for various reasons, some good, some bad, some ugly. But it doesn't matter: most engineers on those projects do not have the option to go another direction, they have to push forward.
In this context, why not refactor, and have your LLM of choice write and optimize the integration tests for you? If the crux of the argument for LLMs is that they are capable of producing sufficient-quality software at dramatically reduced cost, why not have them rewrite the tests?
Parameterized tests are good, but I think he might be talking about exercising all the corner cases in the logic of your function, which to my knowledge almost no languages can auto-generate for but LLMs can sorta-ish figure it out.
We are talking about basic computing for CRUD apps. When you start needing to rely upon "sorta-ish" to describe the efficacy of a tool for such a straightforward and deterministic use case, it may be an indicator you need to rethink your approach.
If you want to discount a tool that may save you an immense amount of time because you might have to help it along the last few feet, that's up to you.
If you can share a tool that can analyze a function and create a test for all corner cases in a popular language, I'm sure some people would be interested in that.
The other thing they do is conveniently not mention all the negative stuff about AI that the source article mentions; they only report on the portion of the source's content that's in any way positive about AI.
And of course, it's an article based on a source article based on a survey (of a single company), with the source article written by a "content marketing manager", and the raw data of the survey isn't released/published, only some marketing summary of what the results (supposedly) were. Very trustworthy.
“AI” is great for coding in the small, it’s like having a powerful semantic code editor, or pairing with a junior developer who can lookup some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway.
But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.
I find this half state kind of useless. If I have to know and understand the code being generated, it's easier to just write it myself. The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.
Feels like a similar situation to self driving where companies want to insist that you should be fully aware and ready to take over in an instant when things go wrong. That's just not how your brain works. You either want to fully disengage, or be actively doing the work.
> The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.
This is exactly my experience, but I guess generating code with deprecated methods is useful for some people.
This, IMHO, is the critical point and why a lot of “deep” development work doesn’t benefit much from the current generation of AI tools.
Last week, I was dealing with some temporal data. I often find working in this area a little frustrating because you spend so much time dealing with the inherent traps and edge cases, so using an AI code generator is superficially attractive. However, the vast majority of my time wasn’t spent writing code, it was getting my head around what the various representations of certain time-based events in this system actually mean and what should happen when they interact. I probably wrote about 100 test cases next, each covering a distinct real world scenario, and working out how to parameterise them so the coverage was exhaustive for certain tricky interactions also required a bit of thought. Finally, I wrote the implementation of this algorithm that had a lot of essential complexity, which means code with lots of conditionals that needs to be crystal clear about why things are being done in a certain order and decisions made a certain way, so anyone reading it later has a fighting chance of understanding it. Which of those three stages would current AI tools really have helped with?
I find AI code generators can be quite helpful for low-level boilerplate stuff, where the required behaviour is obvious and the details tend to be a specific database schema or remote API spec. No doubt some applications consist almost entirely of this kind of code, and I can easily believe that people working on those find AI coding tools much more effective than I typically do. But as 'manoDev says in the parent comment, deeper work is often a specification problem. The valuable part is often figuring out the what and the why rather than the how, and so far that isn’t something AI has been very good at.
Yes, but in my experience actually no. At least not with the bleeding edge models today. I've been able to get LLMs to write whole features to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin asking for an empty application with the characteristics I want (CLI, has subcommands, ...) then I ask it to add a simple feature. Get that working then ask it to enhance functionality progressively, testing as we go. Then when functionality is working I ask for a refactor (often it puts 1500 loc in one file, for example), doc, improve help text, and so on. Basically the same way you'd manage a human.
I've also been close to astonished at the capability LLMs have to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it if all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher order things.
I say all this as a massive AI skeptic dating back to the 1980s.
> I tend to begin asking for an empty application with the characteristics I want (CLI, has subcommands, ...) then I ask it to add a simple feature.
That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.
Instead, a lot of people in the hype train are pretending an AI can work an idea to production from a "CEO level" of detail – that probably ain't happening.
> you're breaking the task into smaller achievable tasks.
this is the part that I would describe as engineering in the first place. This is the part that separates a script kiddie or someone who "knows" one language and can be somewhat dangerous with it, from someone who commands a $200k/year salary, and it is the important part
and so far there is no indication that language models can do this part at. all.
for someone who CAN do the part of breaking down a problem into smaller abstractions, though, some of these models can save you a little time, sometimes, in cases where it's less effort to type an explanation of the problem than it is to type the code directly.
All the hype is about asking an LLM to start with an empty project with loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code) with ancient and complex contracts is a completely different experience.
Very large projects are an area where AI tools can really empower developers without replacing them.
It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done.
Bear in mind those are revenue figures; Claude is costing them hundreds a day.
One imagines Leadership won't be so pleased after the inevitable price hike (which, given the margins software usually commands, is going to be in the 1-3 thousands a day) and the hype wears off enough for them to realize they're spending a full salary automating a partial FTE.
But, by the looks of things, models will be more efficient by then and a cheaper-to-run model will produce comparable output. At least that's how it's been with OSS models, or with the Openai api model. So maybe the inevitable price hike (or rate limiting) may lead to switching models / providers and the results being just as good.
Most of the code that "needs to be written" is just a copy of something standard, "do X in the simplest way possible" code that doesn't need optimizations, and writing it by hand is just a waste of time. AI is good enough to write megabytes of that code, since it's statistically common and part of tons of codebases. It's the other half of the code that AI can't handle, where you need to manually verify it doesn't hallucinate fantastic stuff that manages to compile but doesn't work.
Naive question, but wouldn't it count as having AI write 50%+ of your code if you just use an unintelligent complete-the-line AI tool? In this case the AI is hardly doing anything intelligent, but is still getting credit for doing most of the work.
Yes, there is even a small business that champions a small LLM trained on the language's LSP, your code base, and your recent coding history (not necessarily commits, but any time one presses Ctrl+S). How it works is essentially autocomplete. This functionality is packaged as an IDE plugin: TabNine.
However, now they try to sell subscriptions to LLMs.
Tabnine has been in the scene since at least 2018.
The article did not say what kind of languages/applications those 791 developers were working on. I work on a legacy Java code base (which looks more like C than Java, thankfully) and I can't imagine AI doing any of it. It can do small, isolated, well-formulated chunks (functions that do a very specific task) but even that requires very verbose explanation.
I just can't fathom shipping a big percentage of work using LLMs.
This is self reported unless I missed something. I bet that skews these results quite a bit. Many are very hesitant to say they use AI, and I suspect that's much more likely to be the case when you are new to the field.
Also, green coding? That's new to me. I guess we'll see optional carbon offset purchasing in our subs soon.
One thing I don't hear a lot of people talk about is building prototypes. That's where I see a gigantic time savings. It doesn't have to be beautiful code, it just has to help me answer a question so I can make a decision about where to go next. That and tools. There have been many times where I've wanted to build a task-specific tool but justifying the time would be hard. Now I can create little tools like that, and it's a huge productivity boost.
But I’ve come full circle and have gone back to hand coding after a couple years of fighting LLMs. I’m tired of coaxing their style and fixing their bugs - some of which are just really dumb and some are devious.
I'm in the same exact boat. I started with a lot of different tools but eventually went back to hand coding everything. When using tools like Copilot I noticed I would ship a lot more dumb mistakes. I even experimented with not even using a chat interface, and it turns out that a lot of answers to problems are indeed found with a web search.
I've also just turned off copilot now. I had several cases where bugs in the generated code slipped through and ended up deployed. Bugs I never would have written myself. Reviewing code properly is so much harder than writing it from scratch.
Even then I’ve mostly given up. I’ve seen LLMs change from snake case to camel case for a single method and leave the rest untouched. I’ve seen them completely fabricate APIs to non existent libraries. I’ve seen them get mathematical formulae completely wrong. I’ve seen it make entire methods for things that are builtins of a library I’m already using.
It’s just not worth it anymore for anything that is part of an actual product.
Occasionally I will still churn out little scripts or methods from scratch that are low risk - but anything that gets to prod is pretty much hand coded again.
It basically uses multiple different LLMs from different providers to debate a change or code review. Opus 4.1, Gemini 2.5 Pro, and GPT-5 all have a go at it before it writes out plans or makes changes.
I can be massively more ambitious when coding with AI, but most importantly I have zero emotional investment in the code so I can throw it away and start again whenever I want.
I tried it - didn't like it. Had an LLM work on a backup script since I don't use Bash very often. Took a bunch of learning the quirks of bash to get the code working properly.
While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done. It took me quite a while to figure out why something seemed to work but really didn't (the LLM used command-line commands whose results Bash doesn't interpret the same way).
If it's something I know, I probably won't use an LLM (as it doesn't do my style). If it's something I don't know, I might use it to get me started, but I expect that's all I'll use it for.
Can I ask which agent/model you used?
I'm similarly irritated with shell script coding, but find I have to make scripts fairly often. My experience using various models but latterly Claude Code has been quite different -- it churned out pretty much what I was looking for. Also old, fwiw. I'm older than all shells.
I think they're being really loose with the term "vibe coding", and what they really mean is AI-assisted coding.
Older devs are not letting the AI do everything for them. Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI, but in small sections with the human giving specific instructions.
Then there's debugging, which I don't really trust the AI to do very well. Too many times I've seen it miss the real problem, then try to rewrite large sections of the code unnecessarily. I do most of the debugging myself, with some assistance from the AI.
> Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI
I've largely settled on the opposite. AI has become very good at planning what to do and explaining it in plain English, but its command of programming languages still leaves a lot to be desired.
Yes, much like many of the humans I have worked with, sometimes bad choices are introduced. But those bad choices are caught during the writing of the code, so that's not really that big of a deal when it does happen. It is still a boon to have it do most of the work.
And it remains markedly better than when AI makes bad choices while writing code. That is much harder to catch and requires poring over the code with a fine-tooth comb, to the point that you may as well have just written it yourself, negating all the potential benefits of using it to generate code in the first place.
When debugging, I'll coax the AI to determine what went wrong first - to my satisfaction - and have it go from there. Otherwise it's a descent into madness.
I've been at this for many years. If I want to implement a new feature that ties together various systems and delivers an expected output, I know the general steps that I need to take. About 80% of those steps are creating and stubbing out new files with the general methods and objects I know will be needed, and all the test cases. So... I could either spend the next 4 hours doing that, or spend 3 minutes filling out a CLAUDE.md with the specs and 5 minutes having Claude do it (and fairly well).
I feel no shame in doing the latter. I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices. YMMV.
I think at this point it's about whoever can get the most useful work out of AI, which is actually really hard due to its 'incomplete' state. Finding uses which require very little user input is going to be the next big thing in my opinion, since it seems that LLMs are currently at a wall where they require technical advancements before they can overcome it.
I often use LLMs for method level implementation work. Anything beyond the scope of a single function call I have very little confidence in. This is OK though, since everything is a function and I can perfectly control the blast radius as long as I keep my hands on the steering wheel. I don't ever let the LLM define method signatures for me.
If I don't know how to structure functions around a problem, I will also use the LLM, but I am asking it to write zero code in this case. I am just having a conversation about what would be good paths to consider.
Been in the industry professionally since 1996 writing code and before that 10 years as a hobbyist between a little BASIC, C and a lot of assembly (65C02, 68k, PPC and x86).
In the last 6 months, when I have had an assignment that involved coding, AI has generated 100% of my code. I just described the abstractions I wanted and reusable modules/classes I needed and built on it.
Is "vibe coding" synonymous with using AI code-generation tools now?
I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM. Which is fun and worth trying, but probably not a valid professional tool.
I think what happened is that a lot of people started dismissing all LLM code creation as "vibe coding" because those people were anti-LLM, and so the term itself became an easy umbrella pejorative.
And then, more people saw these critics using "vibe coding" to refer to all LLM code creation, and naturally understood it to mean exactly that. Which means the recent articles we've seen about how good vibe coding starts with a requirements file, then tests that fail, then tests that pass, etc.
Like so many terms that started out being used pejoratively, vibe coding got reclaimed. And it just sounds cool.
Also because we don't really have any other good memorable term for describing code built entirely with LLM's from the ground up, separate from mere autocomplete AI or using LLM's to work on established codebases.
“Agentic coding” is probably more accurate, though many people (fairly) find the term “Agentic” to be buzz-wordy and obnoxious.
I’m willing to vibe code a spike project. That is to say, I want to see how well some new tool or library works, so I’ll tell the LLM to build a proof of concept, and then I’ll study that and see how I feel about it. Then I throw it away and build the real version with more care and attention.
I have "vibe coded" a few internal tools now that are very low risk in terms of negative business impact but nonetheless valuable for our team's efficiency.
E.g one tool packages a debug build of an iOS simulator app with various metadata and uploads it to a specified location.
Another tool spits out my team's github velocity metrics.
These were relatively small scripting apps, that yes, I code reviewed and checked for security issues.
I don't see why this wouldn't be a valid professional tool? It's working well, saves me time, is fun, and safe (assuming proper code review, and LLM tool usage).
With these little scripts it creates it's actually pretty quick to validate their safety and efficacy. They're like validating NP problems.
The original definition of vibe coding meant that you just let the agent write everything, and if it works then you commit it. Your code review and security check turned this from vibe coding into something else.
This is complicated by the fact that some people use “vibe coding” to mean any kind of LLM-assisted coding.
Yeah, for some reason the term has been used interchangeably for a while, which is making it very hard to have a conversation about it since many people think vibe coding is just using AI to assist you.
From Karpathy's original post I understood it to be what you're describing. It is getting confusing.
My personal definition of "vibe coding" is when a developer delegates -- abdicates, really -- responsibility for understanding & testing what AI-generated code is doing and/or how that result is achieved. I consider it something that's separate from & inferior to using AI as a development tool.
I think there is actually pressure to show that you are using AI (stories of CEOs firing employees who supposedly did not "embrace" AI). So people are over-attributing to AI. Though originally vibe coding was meant to be infinite-monkey-style button smashing, people are attributing work to vibe coding just to avoid the crosshairs.
Jesus, that's a staggering figure to me coming from senior developers. I guess I'm the odd one out here, but ChatGPT is nothing more than an index of Stack Overflow (and friends) for me. It's essentially replaced Googling, but once I get the answer I need I'm still just slinging code like an asshole. Copying the output wholesale from any of these LLMs just seems crazy to me.
I'm not a coder but a sysadmin. 35 years or so. I'm conversant with Perl, Python, (nods to C), BASIC, shell, PowerShell, AutoIt (among others).
I muck about with CAD - OpenSCAD, FreeCAD, and 3D printing.
I'm not a senior developer - I pay them.
LLMs are handy in the same way I still have my slide rules and calculators (OK kids I use a calc app) but I do still have my slide rules.
ChatGPT does quite well with the basics for a simple OpenSCAD effort but invents functions within libraries. That is to be expected - it's a next-token decider function and not a real AI.
I just got back into OpenSCAD after recently getting my first new 3D Printer in 10 years, so I basically had to relearn it. ChatGPT got the syntax wrong for the most basic of operations.
Indeed. This is my biggest fear for engineers as a whole. LLMs can be a great productivity boost in the very short term, but can so easily be abused. If you build a product with it, suddenly everyone is an engineering manager and no one is an expert on it. And growth as an engineer is stunted. It reminds me of abusing energy drinks or grinding to the point of burnout... But worse.
I think we'll find a middle ground though. I just think it hasn't happened yet. I'm cautiously optimistic.
Validity of the sample size is not determined by its fraction of the whole population. I don't know the formulas and I'm not a statistician. Maybe someone can drop some citations.
I generally agree with this just from a perspective of personal sentiment, it does feel wrong.
But statistically speaking, at a 95% confidence level you'd be within a +/- 3.5% margin of error given the 791 sample size, irrespective of whether the population is 30k or 30M.
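As a rough check of that figure, the worst-case margin of error for a simple random sample is z*sqrt(p(1-p)/n); a quick back-of-the-envelope computation:

```python
# Worst-case (p = 0.5) margin of error for n = 791 at ~95% confidence.
import math

n = 791
z = 1.96          # ~95% confidence
p = 0.5           # worst case, maximises the margin
moe = z * math.sqrt(p * (1 - p) / n)
print(f"{moe:.3f}")   # ~0.035, i.e. about +/- 3.5 percentage points
```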
You should read more about statistical significance. Under some reasonable assumptions, you can confidently deduce things with small sample sizes.
From another perspective: we've deduced a lot of things about how atoms work without any given experiment inspecting more than an insignificant fraction of all atoms.
TL;DR: The population size (25e6 total devs, 1e80 atoms in observable universe) is almost entirely irrelevant to hypothesis testing.
Haha right. I would imagine a "Senior Developer" who is super into AI assisted coding would be more likely to come across this survey and want to participate.
It's 80% at least for me. I've hit a groove for sure. Not only is it 80%, it's very well tested and almost exactly how I would have preferred it. Big tips are to design your CLAUDE.md files how you would actually code from a high-level perspective, and not try the usual "You are an expert Google Distributed Engineer" and all that embarrassing AI hype bro crap.
I’m still yet to find a use-case for AI-generated code in my workflow.
Even when I am building tools that heavily utilize modern AI, I haven’t found it. Recently, I disabled the AI-powered code completion in my IDE because I found that the cognitive load required to evaluate the suggestions it provided was greater and more time consuming than just writing the code I was already going to write anyways.
I don’t know if this is an experience thing or not - I mainly work on a tech stack I have over a decade of experience in - but I just don’t see it.
Others have suggested generating tests with AI but I find that horrifying. Tests are the one thing you should be the most anal about accuracy on compared to anything else in your codebase.
Apparently vibe coding now just means ai assisted coding beyond immediate code completion?
For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt. The relevant key business and technical concerns come into play, including complexities like balancing somewhat conflicting shorter and longer term concerns.
Juniors are probably all going to have to be learning this kind of stuff at an accelerated rate now (we don't need em cranking out REST endpoints or whatever anymore), but at this point this takes a senior perspective and senior skills.
Anyone can get an LLM and agentic tool to crank out code now. But you really need to have them crank out code to do something useful.
> around a third of senior developers with more than a decade of experience are using AI code-generation tools such as Copilot, Claude, and Gemini to produce over half of their finished software, compared to 13 percent for those devs who've only been on the job for up to two years.
A third? I would expect at least a majority based on the headline and tone of the article... Isn't this saying 66% are down on vibe coding?
Seems about right for me (older developer at a big tech company). But we need to define what it means that the code is AI-generated. In my case, I typically know how I want the code to look, and I'm writing a prompt to tell the agent to do it. The AI doesn't solve any problem, it just does the typing and helps with syntax. I'm not even sure I'm ultimately more productive.
Yeah I’m still not more productive. Maybe 10% more. But it alleviates a lot of mental energy, which is very nice at the age of 40.
I find AI is most useful at the ancillary extra stuff. Things that I'd never get to myself. Little scripts of course, but more like "it'd be nice to rename this entire feature / db table / tests to better match the words that the business has started to use to discuss it".
In the past, that much nitpicky detail just wouldn't have gotten done, my time would have been spent on actual features. But what I just described was a 30 minute background thing in claude code. Worked 95%, and needed just one reminder tweak to make it deployable.
The actual work I do is too deep in business knowledge to be AI coded directly, but I do use it to write tests to cover various edge cases, trace current usage of existing code, and so on. I also find AI code reviews really useful to catch 'dumb errors' - nil errors, type mismatches, style mismatch with existing code, and so on. It's in addition to human code reviews, but easy to run on every PR.
Is alleviating that mental energy going to make you a worse programmer in the long run? Is this like skipping mental workouts that were ultimately keeping you sharp?
Also in my 40s and above senior level. There aren't many mental workouts in day-to-day coding because the world is just not a new challenge every day. What I consider 'boilerplate' just expands to include things I've written a dozen times before in a different context. AI can write that to my taste and I can tackle the few actual challenges.
Does coding in non-assembly programming languages make you a worse programmer in the long run because you are not exposed to the deepest level of complexity?
My guess is if we assume the high level and low level programmers are equally proficient in their mediums, they would use the same amount of effort to tackle problems, but the kinds of problems they can tackle are vastly different
At 51, no one hires me because of my coding ability. They hire me because I know how to talk to the “business” and lead (larger projects) or implement (smaller projects) and to help sales close deals.
Don’t get me wrong, I care very deeply about the organization and maintainability of my code and I don’t use “agents”. I carefully build my code (and my infrastructure as code based architecture) piece by piece through prompting.
And I do have enough paranoia about losing my coding ability - and I have lost some because of LLMs - that I keep a year in savings to have time to practice coding for three months while looking for a job.
What kind of codebases do you work on if you don't mind me asking?
I've found a huge boost from using AI to deal with APIs (databases, k8s, aws, ...) but less so on large codebases that needed conceptual improvements. But at worst, I'm getting more than a 10% benefit, just because the AIs can read files so quickly and answer questions and propose reasonable ideas.
How are you quantifying that 10% ?
Strangely I've found myself more exhausted at the end of the week and I think it's because of the constant supervision necessary to stop Claude from colouring outside the lines when I don't watch it like a hawk.
Also I tend to get more done at a time, it makes it easier to get started on "gruntwork" tasks that I would have procrastinated on. Which in turn can lead to burnout quite quickly.
I think in the end it's just as much "work", just a different kind of work and with more quantity as a result.
Now we just need an AI that helps with more quality.
> it alleviates a lot of mental energy
For me, this is the biggest benefit of AI coding. And it's energy saved that I can use to focus on higher level problems e.g. architecture thereby increasing my productivity.
I didn't see much mention of tab-completions in the survey and comments here. To me that's most of the coding AI is doing at my end, even though it seems to pass unnoticed nowadays. It's massive LOC (and comments!), and that's where I find AI immensely productive.
Don't you have to keep dismissing incorrect auto-complete? For me I have a particular idea in mind, and I find auto-complete to be incredibly annoying.
It breaks flow. It has no idea what my intention is, but very eagerly provides suggestions I have to stop and swat away.
Yeah, [autocomplete: I totally agree]
it's so [great to have auto-complete]
annoying to constantly [have to type]
have tons of text dumped into your text area. Sometimes it looks plausibly right, but with subtle little issues. And you have to carefully analyze whatever it output for correctness (like constant code review).
That we're trying to replace entry/mediocre/expert level code writing with entry/mediocre/expert level code reading is one of the strangest aspects of this whole AI paradigm.
There's literally no way I can see that resulting in better quality, so either that is not what is happening or we're in for a rude awakening at some point.
https://www.joelonsoftware.com/2000/05/26/reading-code-is-li...
https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article)
These articles are 25 years old.
This is how you know the "AI" proselytizers are completely full of shit. They're trying to bend the narrative with a totally unrealistic scenario where reading and reviewing code is somehow "more efficient" than writing it. This is only true if you
(a) don't know what you're doing and just approve everything you see or
(b) don't care how bad things get
there is no "we" or at least not sufficiently differentiated. Another layer is inserted into .. everything? think MSFT Teams. Your manager's manager is being empowered.. you become a truck driver who must stay on the route, on schedule or be replaced.
fwiw, VS Code has a snooze auto complete button. Each press is 5m, a decently designed feature imo
Copilot replacing intellisense is a huge shame. Why get actual auto complete when you can get completely hallucinated methods and properties instead.
Does it actually replace it? I can still get intellisense style suggestions on my ides (various jetbrains and visual studio) and it's still just as useful (what can this object do I wonder...)
Does it even fall into "AI-generated" category? GitHub Copilot has been around for years, I certainly remember using it long before the recent AI boom, and at that time it wasn't even thought of as any kind of a breakthrough.
And at this point it's not just a productivity booster, it's as essential as using a good IDE. I feel extremely uncomfortable and slow writing any code without auto-completion.
I think there is a difference between type system or Language Server completions and AI generated completion.
When the AI tab completion fills in full functions based on the function definition you have half typed, or completes a full test case the moment you start typing - mock data values and all - that just feels mind-reading magical.
I haven't tried it lately, but how good are these models at generating property-based tests?
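For context, a property-based test states an invariant and lets the framework hunt for counterexamples rather than enumerating fixed cases. A sketch, assuming the hypothesis library and a hypothetical run-length-encoding round trip:

```python
from hypothesis import given, strategies as st

def run_length_encode(s: str) -> list[tuple[str, int]]:
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

@given(st.text())  # hypothesis generates arbitrary strings, including edge cases
def test_encode_decode_roundtrip(s):
    assert run_length_decode(run_length_encode(s)) == s
```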
Tab completions simply hit the bottleneck problem. I don't want to press tab on every line; it makes no sense. I would rather have AI generate a function block and then integrate it back. It saves me the typing hassle and I can focus on design and business logic.
I’m not yet up to half (because my corporate code base is a mess that doesn’t lend itself well to AI)
But your approach sounds familiar to me. I find sometimes it may be slower and lower quality to use AI, but it requires less mental bandwidth from me, which is sometimes a worthwhile trade off.
I wouldn't say I'm old, but I suddenly fell into the coding agent rabbit hole when I had to write some Python automations against Google APIs.
Found myself having 3-4 different sites open for documentation, context switching between 3 different libraries. It was a lot to take in.
So I said, why not give AI a whirl. It helped me a lot! And since then I have published at least 6 different projects with the help of AI.
It refactors stuff for me, it writes boilerplate for me, most importantly it's great at context switching between different topics. My work is pretty broadly around DevOps, automation, system integration, so the topics can be very wide range.
So no I don't mind it at all, but I'm not old. The most important lesson I learned is that you never trust the AI. I can't tell you how often it has hallucinated things for me. It makes up entire libraries or modules that don't even exist.
It's a very good tool if you already know the topic you have it work on.
But it also hit me that I might be training my replacement. Every time I correct its mistakes I "teach" the database how to become a better AI and eventually it won't even need me. Thankfully I'm very old and will have retired by then.
I love the split personality vibe here.
Or perhaps the commenter just aged a lot while writing the post.
First line: "I wouldn't say I'm old"
Last line: "Thankfully I'm very old"
Hmm.....
Maybe he meant "I'll be very old..." the second time.
You probably jest, but I'm sure some HN users do actually have split personalities. (or dissociative identities, as they're called nowadays)
And I’m sure some HN users are autistic or have other traits that make them unable to appreciate a joke.
When it comes to dealing with shitty platforms AI is really the best thing ever. I have had the misfortune of writing automations for Atlassian with their weird handling of refresh keys, and had AI not pointed out that Atlassian had the genius idea of invalidating refresh keys after a single use, I would have wasted a lot more of my time. For this sort of manual labour, AI is the best tool there is.
One-time-use refresh keys are not all that uncommon, probably more common than not, but lots of clients handle that for you.
> invalidating refresh keys after single use
That's called refresh token rotation and is a valid security practice.
I know, but the RFC doesn't mandate it. https://datatracker.ietf.org/doc/html/rfc6749#section-6
Not sure why Google doesn't do this but Atlassian does.
Google OAuth2 refresh tokens are definitely single use.
At least it's not documented here: https://developers.google.com/identity/protocols/oauth2#5.-r.... They have a limit on the number of tokens but not on the number of uses per token.
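For what it's worth, handling rotation mostly comes down to persisting the replacement refresh token on every refresh. A rough sketch against a generic RFC 6749 token endpoint (the URL, client credentials, and JSON token store below are placeholders, not an Atlassian- or Google-specific API):

```python
import json
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder token endpoint
CLIENT_ID = "my-client-id"                           # placeholder credentials
CLIENT_SECRET = "my-client-secret"
TOKEN_FILE = "tokens.json"

def refresh_access_token() -> str:
    with open(TOKEN_FILE) as f:
        tokens = json.load(f)

    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "refresh_token": tokens["refresh_token"],
    })
    resp.raise_for_status()
    new_tokens = resp.json()

    # The crucial bit with rotation: persist the replacement refresh token
    # immediately, because the old one has just been invalidated.
    tokens["refresh_token"] = new_tokens.get("refresh_token", tokens["refresh_token"])
    tokens["access_token"] = new_tokens["access_token"]
    with open(TOKEN_FILE, "w") as f:
        json.dump(tokens, f)

    return tokens["access_token"]
```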
This article goes completely against my experience so far.
I teach at an internship program and the main problem with interns since 2023 has been their over reliance on AI tools. I feel like I have to teach them to stop using AI for everything and think through the problem so that they don't get stuck.
Meanwhile many of the seniors around me are stuck in their ways, refusing to adopt interactive debuggers to replace their printf() debug habits, let alone AI tooling...
> Meanwhile many of the seniors around me are stuck in their ways, refusing to adopt interactive debuggers to replace their printf() debug habits, let alone AI tooling...
When I was new to the business, I used interactive debugging a lot. The more experienced I got, the less I used it. printf() is surprisingly useful, especially if you upgrade it a little bit to a log-level aware framework. Then you can leave your debugging lines in the code and switch it on or off with loglevel = TRACE or INFO, something like that.
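A minimal sketch of that "printf upgraded to log levels" idea using Python's standard logging module (the reconcile function is just an illustration): the debug lines stay in the code permanently and are switched on or off by the configured level.

```python
import logging

logging.basicConfig(level=logging.INFO)   # flip to logging.DEBUG to "turn on" tracing
log = logging.getLogger(__name__)

def reconcile(orders):
    log.debug("reconcile called with %d orders", len(orders))  # lazy %-formatting:
    total = 0                                                  # nearly free when DEBUG is off
    for o in orders:
        log.debug("processing order %r", o)
        total += o
    log.info("reconciled %d orders, total=%s", len(orders), total)
    return total

reconcile([10, 20, 12])
```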
> printf() is surprisingly useful, especially if you upgrade it a little bit to a log-level aware framework.
What do you mean by this? Do you mean using a logging framework instead of printf()?
This is absolutely true. If anything, interactive debuggers are a crutch and actual logging is the real way of debugging. You really can't debug all sorts of things in an interactive debugger, things like timing issues, thread problems, and you certainly can't find the actual hard bugs that are in running services in production, you know, where the bugs actually happen and are found. Or on other people's machines that you can't just attach a debugger. You need good logging with a good logging library that doesn't affect performance too much when it's turned off, and those messages can also provide very useful context to what things are going on, many times as good if not better than a comment, because at least the log messages are compiled in and type checked, as opposed to comments, which can easily go stale.
Both are valid. If your code is slightly complex, it's invaluable to run it at least once with a debugger to verify that your logic is all good, and using logs for this is highly inefficient - e.g. if you have huge data structures that are a pain to print, or if after starting the program you notice that you forgot to add some print somewhere it's needed.
And obviously, when you can't hook up the debugger, logs are mandatory. It doesn't have to be one or the other.
> verify your logic
This is what unit tests are for.
And no one has ever written a buggy unit test.
That’s what the code is for.
Really? I’ve been using interactive debuggers since the days of Turbo C/Turbo Pascal in the mid 1990s.
Yes you need good logging also.
I kind of had the opposite experience. I used to rely mostly on printfs and the like, but started using the debugger more.
printf doesn't replace going up and down the call stack in the debugger to analyze the chain of calls (you'd have to spam debug printfs everywhere you expect that chain to happen, which would waste time). The debugger is really powerful if you use it more than superficially.
> you'd have to spam debug printfs everywhere you expect that chain to happen, which would waste time
It's not wasting time, it's narrowing in on the things you know you need to look for and hiding everything else. With a debugger you have to do this step mentally every time you look at the debugger output.
Trying to guess what that chain is and putting printfs all along that path feels like a poor simulation of what the debugger can do out of the box and, unlike us, precisely. So I'd say it's exactly the opposite.
If you only care about some specific spot, then sure - printf is enough. But you also need to recompile every time you add a new one or change debug-related details, while the debugger lets you re-run things without recompilation. So if anything, the printf method can take more time.
Also, in the debugger you can reproduce printf using the REPL.
Interactive debuggers and printf() are both completely valid and have separate use-cases with some overlap. If you're trying to use, or trying to get people to use, exclusively one, you've got some things to think about.
That's funny. I remember using interactive debuggers all the time back in the '90s, but it's been a long time since I've bothered. Logging, reading, and thinking is just... easier.
Really? I find myself thinking the opposite. My program always runs in debug mode, and when there's some issue I put a breakpoint, trigger it, and boom, I can check what is wrong. I don't need to stop the program, insert a new line to print what I _guess_ is wrong, restart the program from scratch, etc.
Setting up proper debugging for my stack is one of the first things I do, because I find it way less tedious. For example, if you have an issue in a huge Object or Array, will you actually print all the content, paste it somewhere else and search through the logs? And by the way, most debuggers also have the ability to set up log points anyway, without having to restart your program. Genuinely curious how writing extra lines and having to restart makes things easier.
Of course I'm not saying that I never debug with logs; sometimes it's required or even more efficient, but it's often my second choice.
Also conditional breakpoints. I.e. break on this line if foo==5
I couldn't imagine going back to print statement based debugging. Would be a massive waste of time.
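On the conditional breakpoints point, a tiny illustration (Python purely for concreteness; the loop and function are made up, and most debuggers also let you attach the condition directly, e.g. "b file.py:42, foo == 5" in pdb):

    def process(value):
        return value * 2          # stand-in for the real work under investigation

    for foo in range(10_000):
        # Stop in the debugger only for the interesting case, instead of
        # stepping through every iteration or spraying prints everywhere.
        if foo == 5:
            breakpoint()          # built-in since Python 3.7; drops into pdb with foo in scope
        process(foo)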
The right tool for the right job. If someone gets the job done with printf() then that would be good enough for me.
Interactive debuggers are a great way to waste a ton of time and get absolutely nowhere. They do have their uses, but those are not all that common. The biggest use case for me for GDB has been inspecting stack traces; having a good mental model of the software you are working on is usually enough to tell you exactly what went wrong if you know where it went wrong.
Lots of people spend way too much time debugging code instead of thinking about it before writing.
Oh, and testing >> debugging.
I've tried the interactive debuggers but I've yet to find a situation where they worked better than just printing. I use an interactive console to test what stuff does, but inline in the app I've never hit anything where printing wasn't the straightforward, fast solution.
I'm not above the old print here or there but the value of an interactive debugger is being able to step and inspect the state of variables at all the different call sites, for instance.
I've only found them to be useful in gargantuan OOP piles where the context is really hard to keep in your head and getting to any given point in execution can take minutes. In those cases interactive debugging has been invaluable.
I guess that’s the difference. I do rails dev mostly and it’s just put a print statement in, then run the unit test. It’s a fast feedback loop.
Nitpicking a bit here, but there's nothing wrong with printf debugging. It's immensely helpful for debugging concurrent programs, where stopping one part would mess up the state and maybe even avoid the bug you were trying to reproduce.
As for tooling, I really love AI coding. My workflow is pasting interfaces in ChatGPT and then just copy pasting stuff back. I usually write the glue code by hand. I also define the test cases and have AI take over those laborious bits. I love solving problems and I genuinely hate typing :)
The old fogeys don't rely on printf because they can't use a debugger, but because a debugger stops the entire program and requires you to go step by step.
Printf gives you an entire trace or log you can glance at, giving you a bird's eye view of entire processes.
Most decent debuggers have conditional breakpoints.
And have since the mid 1990s at least…
In terms of LOC, maybe; in terms of importance, I think it's much less. At least that's how I use LLMs.
While I understand that <Enter model here> might produce the meaty bits as well, I believe that having a truck factor of basically 0 (since no one REALLY understands the code) is a recipe for disaster and, I dare say, a threat to the long-term maintainability of a code base.
I feel that any team needs someone with that level of understanding to fix non-trivial issues.
However, by all means, I use the LLM to create all the scaffolding, test fixtures, ... because that is mental energy that I can use elsewhere.
Agreed. If I use an LLM to generate fairly exhaustive unit tests of a trivial function just because I can, that doesn’t mean those lines are as useful as core complex business logic that it would almost certainly make subtle mistakes in.
> If I … generate fairly exhaustive unit tests of a trivial function
… then you are not a senior software engineer
Neither are you if that's your understanding of a senior engineer
I think the parent commenter's point was that it is nearly trivial to generate variations on unit tests in most (if not all) unit test frameworks. For example:
Java: https://docs.parasoft.com/display/JTEST20232/Creating+a+Para...
C# (nunit, but xunit has this too): https://docs.nunit.org/articles/nunit/technical-notes/usage/...
Python: https://docs.pytest.org/en/stable/example/parametrize.html
cpp: https://google.github.io/googletest/advanced.html
A belief that the ability of LLMs to generate parameterizations is intrinsically helpful to a degree which cannot be trivially achieved in most mainstream programming languages/test frameworks may be an indicator that an individual has not achieved a substantial depth of experience.
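For concreteness, a sketch of the kind of parameterization the pytest link above describes (the function and the cases are made up, but this is the whole trick):

    import pytest

    def clamp(value, low, high):
        return max(low, min(value, high))

    @pytest.mark.parametrize("value, expected", [
        (-5, 0),    # below the range
        (5, 5),     # inside the range
        (15, 10),   # above the range
    ])
    def test_clamp(value, expected):
        assert clamp(value, 0, 10) == expected

Adding another edge case is one line in the table, which is why LLM-generated variations of trivial tests aren't, by themselves, much of a differentiator.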
The useful part is generating the mocks. The various auto-mocking frameworks are so hit or miss that I end up having to make mocks manually, which is time-consuming and boring. LLMs help out dramatically and save literally hours of boring, error-prone work.
Why mock at all? Spend the time making integration tests fast. There is little reason a database, queue, etc. can't be set up on a per-test-group basis and be made fast. Reliable software is built upon (mostly) reliable foundations.
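As a sketch of what "per-test and fast" can look like with a real (if tiny) database instead of a mock - in-memory SQLite here purely as an example; the same shape works with a containerized Postgres or queue:

    import sqlite3
    import pytest

    @pytest.fixture()
    def db():
        conn = sqlite3.connect(":memory:")  # fresh, real database for each test
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        yield conn
        conn.close()

    def test_insert_user(db):
        db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
        count = db.execute("SELECT count(*) FROM users").fetchone()[0]
        assert count == 1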
hmmmm. I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components. And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
> I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components.
There is a dramatic difference between unreliable in the sense of S3 or other services and unreliable as in "we get different sets of logical outputs when we provide the same input to a LLM". In the first, you can prepare for what are logical outcomes -- network failures, durability loss, etc. In the latter, unless you know the total space of outputs for a LLM you cannot prepare. In the operational sense, LLMs are not a system component, they are a system builder. And a rather poor one, at that.
> And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
Yeah, it's not that hard to include that in modern testing.
There are thousands of projects out there that use mocks for various reasons, some good, some bad, some ugly. But it doesn't matter: most engineers on those projects do not have the option to go another direction, they have to push forward.
In this context, why not refactor, and have your LLM of choice write and optimize the integration tests for you? If the crux of the argument for LLMs is that they can produce software of sufficient quality at dramatically reduced cost, why not have them rewrite the tests?
Parameterized tests are good, but I think he might be talking about exercising all the corner cases in the logic of your function, which to my knowledge almost no language tooling can auto-generate, but LLMs can sorta-ish figure out.
We are talking about basic computing for CRUD apps. When you start needing to rely upon "sorta-ish" to describe the efficacy of a tool for such a straightforward and deterministic use case, it may be an indicator you need to rethink your approach.
If you want to discount a tool that may save you an immense amount of time because you might have to help it along the last few feet, that's up to you.
If you can share a tool that can analyze a function and create a test for all corner cases in a popular language, I'm sure some people would be interested in that.
You should look up IntelliTest and the ReSharper test generator. Products exist for this.
We're not a licensed profession with universally defined roles. It's whatever the speaker wants it to be given how wildly it varies.
So how many developers in that survey are those?
They surveyed 791 developers (:D) and "a third of senior developers" do that. That's... generously, what... 20 people?
It's amazing how everyone can massage numbers when they're trying to sell something.
The other thing they do is conveniently not mention all the negative stuff about AI that the source article mentions, they only report on the portion of content from the source that's in any way positive of AI.
And of course, it's an article based on a source article based on a survey (of a single company), with the source article written by a "content marketing manager", and the raw data of the survey isn't released/published, only some marketing summary of what the results (supposedly) were. Very trustworthy.
“AI” is great for coding in the small, it’s like having a powerful semantic code editor, or pairing with a junior developer who can lookup some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway.
But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.
I find this half state kind of useless. If I have to know and understand the code being generated, it's easier to just write it myself. The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.
Feels like a similar situation to self driving where companies want to insist that you should be fully aware and ready to take over in an instant when things go wrong. That's just not how your brain works. You either want to fully disengage, or be actively doing the work.
> The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.
This is exactly my experience, but I guess generating code with deprecated methods is useful for some people.
> It turns into a specification problem.
This, IMHO, is the critical point and why a lot of “deep” development work doesn’t benefit much from the current generation of AI tools.
Last week, I was dealing with some temporal data. I often find working in this area a little frustrating because you spend so much time dealing with the inherent traps and edge cases, so using an AI code generator is superficially attractive. However, the vast majority of my time wasn’t spent writing code, it was getting my head around what the various representations of certain time-based events in this system actually mean and what should happen when they interact. I probably wrote about 100 test cases next, each covering a distinct real world scenario, and working out how to parameterise them so the coverage was exhaustive for certain tricky interactions also required a bit of thought. Finally, I wrote the implementation of this algorithm that had a lot of essential complexity, which means code with lots of conditionals that needs to be crystal clear about why things are being done in a certain order and decisions made a certain way, so anyone reading it later has a fighting chance of understanding it. Which of those three stages would current AI tools really have helped with?
I find AI code generators can be quite helpful for low-level boilerplate stuff, where the required behaviour is obvious and the details tend to be a specific database schema or remote API spec. No doubt some applications consist almost entirely of this kind of code, and I can easily believe that people working on those find AI coding tools much more effective than I typically do. But as 'manoDev says in the parent comment, deeper work is often a specification problem. The valuable part is often figuring out the what and the why rather than the how, and so far that isn’t something AI has been very good at.
Yes, but in my experience actually no. At least not with the bleeding-edge models today. I've been able to get LLMs to write whole features to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then I ask it to add a simple feature. Get that working, then ask it to enhance functionality progressively, testing as we go. Then when functionality is working I ask for a refactor (often it puts 1500 LOC in one file, for example), docs, improved help text, and so on. Basically the same way you'd manage a human.
I've also been close to astonished at the capability LLMs have to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM, I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it if all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher order things.
I say all this as a massive AI skeptic dating back to the 1980s.
> I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then I ask it to add a simple feature.
That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.
Instead, a lot of people in the hype train are pretending an AI can work an idea to production from a "CEO level" of detail – that probably ain't happening.
> you're breaking the task into smaller achievable tasks.
this is the part that I would describe as engineering in the first place. This is the part that separates a script kiddie or someone who "knows" one language and can be somewhat dangerous with it, from someone who commands a $200k/year salary, and it is the important part
and so far there is no indication that language models can do this part at. all.
for someone who CAN do the part of breaking a problem down into smaller abstractions, though, some of these models can save you a little time, sometimes, in cases where it's less effort to type an explanation of the problem than it is to type the code directly...
which is to say... sometimes.
All the hype is about asking an LLM to start with an empty project with loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code) with ancient and complex contracts is a completely different experience.
Very large projects are an area where AI tools can really empower developers without replacing them.
It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done.
AI makes so many mistakes, I cannot trust it with telling me the truth about how a large codebase works.
I looked at our Anthropic bill this week and saw that one of our best engineers was spending $300/day on Claude. Leadership was psyched about it.
I was told that I wasn't using it enough by one arm of the company and that I was spending too much by another.
Meanwhile, try as I might, I couldn't prevent it from being useless.
I know of no better metaphor than that of what it's like being a developer in 2025.
Is he using it to code or just chatting up his AI girlfriend.
Claude is making $72k a year off a consistent $300/day spend (roughly 240 working days).
Bear in mind those are revenue figures; that usage is costing Anthropic hundreds a day to serve.
One imagines Leadership won't be so pleased after the inevitable price hike (which, given the margins software usually targets, is going to be in the one-to-three thousands a day) and the hype wears off enough for them to realize they're spending a full salary automating a partial FTE.
But, by the looks of things, models will be more efficient by then and a cheaper-to-run model will produce comparable output. At least that's how it's been with OSS models, and with the OpenAI API models. So the inevitable price hike (or rate limiting) may just lead to switching models/providers, with the results being just as good.
> But, by the looks of things, models will be more efficient by then and a cheaper-to-run model will produce comparable output
So far the evidence points the other way: things are getting more expensive for similar outputs.
Most of the code that "needs to be written" is just a copy of something standard - "do X in the simplest way possible" code that doesn't need optimization - and writing it by hand is just a waste of time. AI is good enough to write megabytes of that code, since it's statistically common and part of tons of codebases. It's the other half of the code that AI can't handle, where you need to manually verify it doesn't hallucinate fantastical stuff that manages to compile but doesn't work.
Naive question, but wouldn't it count as having AI write 50%+ of your code if you just use an unintelligent complete-the-line AI tool? In this case the AI is hardly doing anything intelligent, but it still gets credit for doing most of the work.
Yes, there is even a small business that champions a small LLM trained on the language's LSP, your code base, and your recent coding history (not necessarily commits, but any time you press Ctrl+S). How it works is essentially autocomplete. This functionality is packaged as an IDE plugin: TabNine.
However, now they try to sell subscriptions to LLMs.
Tabnine has been in the scene since at least 2018.
At a minimum, 30-50% of bogus surveys are bogus but I'm willing to bet it's a lot more.
The article did not say what kind of languages/applications those 791 developers were working on. I work on a legacy Java code base (which looks more like C than Java, thankfully) and I can't imagine AI doing any of it. It can do small, isolated, well-formulated chunks (functions that do a very specific task), but even that requires a very verbose explanation.
I just can't fathom shipping a big percentage of work using LLMs.
This is self-reported, unless I missed something. I bet that skews these results quite a bit. Many are very hesitant to say they use AI, and I suspect that's much more likely to be the case when you are new to the field.
Also, green coding? That's new to me. I guess we'll see optional carbon offset purchasing in our subs soon.
Breaking news: 1/3 of devs use autocomplete. Breaking news: autocomplete now autocompletes more code.
One thing I don't hear a lot of people talk about is building prototypes. That's where I see a gigantic time savings. It doesn't have to be beautiful code, it just has to help me answer a question so I can make a decision about where to go next. That and tools. There have been many times where I've wanted to build a task-specific tool but justifying the time would be hard. Now I can create little tools like that, and it's a huge productivity boost.
Brute-forcing a problem by writing more lines with an LLM instead of designing better code is a step in the wrong direction.
I guess I’m an older developer.
But I’ve come full circle and have gone back to hand coding after a couple years of fighting LLMs. I’m tired of coaxing their style and fixing their bugs - some of which are just really dumb and some are devious.
Artisanal hand craft for me!
I'm in the same exact boat. I started with a lot of different tools but eventually went back to hand coding everything. When using tools like Copilot I noticed I would ship a lot more dumb mistakes. I even experimented with not using a chat interface at all, and it turns out that a lot of answers to problems are indeed found with a web search.
I've also just turned off Copilot now. I had several cases where bugs in the generated code slipped through and ended up deployed - bugs I never would have written myself. Reviewing code properly is so much harder than writing it from scratch.
By all means, if my goal is actually crafting anything.
Usually it isn't, though - I just want to pump out code changes ASAP (but not sooner).
Even then I've mostly given up. I've seen LLMs change from snake case to camel case for a single method and leave the rest untouched. I've seen them completely fabricate APIs for non-existent libraries. I've seen them get mathematical formulae completely wrong. I've seen them write entire methods for things that are built-ins of a library I'm already using.
It’s just not worth it anymore for anything that is part of an actual product.
Occasionally I will still churn out little scripts or methods from scratch that are low risk - but anything that gets to prod is pretty much hand coded again.
This changed my experience significantly:
https://github.com/BeehiveInnovations/zen-mcp-server/blob/ma...
It basically uses multiple different LLMs from different providers to debate a change or code review. Opus 4.1, Gemini 2.5 Pro, and GPT-5 all have a go at it before it writes out plans or makes changes.
The article is saying older devs vibe code: I think you misunderstood
(Article was https://www.theregister.com/2025/08/28/older_developers_ai_c... when this was posted; we've since changed it)
I didn’t misunderstand. I tried to vibe code, and now I don’t. Not sure how you misinterpreted that.
Key word: "But"
I can be massively more ambitious when coding with AI, but most importantly I have zero emotional investment in the code so I can throw it away and start again whenever I want.
I would certainly hope junior developers don't rely too much on AI; they need the opportunity to learn to do this stuff themselves.
I tried it - didn't like it. Had an LLM work on a backup script since I don't use Bash very often. Took a bunch of learning the quirks of bash to get the code working properly.
While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done. It took me quite a while to figure out why something appeared to work but really didn't (the LLM used command-line commands whose results Bash doesn't interpret the same way).
If it's something I know, I probably won't use an LLM (as it doesn't do my style). If it's something I don't know, I might use it to get me started, but I expect that's all I'll use it for.
Can I ask which agent/model you used? I'm similarly irritated with shell script coding, but find I have to make scripts fairly often. My experience using various models but latterly Claude Code has been quite different -- it churned out pretty much what I was looking for. Also old, fwiw. I'm older than all shells.
It was Gemini, I picked the point of least resistance. I've heard Claude does better but haven't looked at it.
I think they're being really loose with the term "vibe coding", and what they really mean is AI-assisted coding.
Older devs are not letting the AI do everything for them. Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI, but in small sections with the human giving specific instructions.
Then there's debugging, which I don't really trust the AI to do very well. Too many times I've seen it miss the real problem, then try to rewrite large sections of the code unnecessarily. I do most of the debugging myself, with some assistance from the AI.
They're also being really loose with the term "older developers" by describing it as anybody with more than ten years of experience.
> Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI
I've largely settled on the opposite. AI has become very good at planning what to do and explaining it in plain English, but its command of programming languages still leaves a lot to be desired.
It's good at checking plans, and helping with plans, but I've seen it make really really bad choices. I don't think it can replace a human architect.
Yes, much like many of the humans I have worked with, sometimes bad choices are introduced. But those bad choices are caught during the writing of the code, so that's not really that big of a deal when it does happen. It is still a boon to have it do most of the work.
And that remains markedly better than when AI makes bad choices while writing code. That is much harder to catch and requires poring over the code with a fine-tooth comb, to the point that you may as well have just written it yourself, negating all the potential benefits of using it to generate code in the first place.
It can't replace a human anything, yet, but that doesn't seem to be stopping anyone from trying, unfortunately :(
When debugging, I'll coax the AI to determine what went wrong first - to my satisfaction - and have it go from there. Otherwise it's a descent into madness.
That doesn't worry me much. What's way more concerning is that junior/mid-level developers likely have way more than half of their code "AI"-generated.
At least senior developers allegedly know how to error-proof their code to some extent.
I've been at this for many years. If I want to implement a new feature that ties together various systems and delivers an expected output, I know the general steps that I need to take. About 80% of those steps are creating and stubbing out new files with the general methods and objects I know will be needed, and all the test cases. So... I could either spend the next 4 hours doing that, or spend 3 minutes filling out a CLAUDE.md with the specs and 5 minutes having Claude do it (and fairly well).
I feel no shame in doing the latter. I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices. YMMV.
> I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices.
Could you share some examples / tips about this?
I think at this point it's about whoever can get the most useful work out of AI, which is actually really hard due to its 'incomplete' state. Finding uses which require very little user input is going to be the next big thing, in my opinion, since it seems that LLMs are currently at a wall where they require technical advancements before they can overcome it.
I’m glad to see people still employed… I remember what that was like.
I often use LLMs for method level implementation work. Anything beyond the scope of a single function call I have very little confidence in. This is OK though, since everything is a function and I can perfectly control the blast radius as long as I keep my hands on the steering wheel. I don't ever let the LLM define method signatures for me.
If I don't know how to structure functions around a problem, I will also use the LLM, but I am asking it to write zero code in this case. I am just having a conversation about what would be good paths to consider.
Been in the industry professionally since 1996 writing code and before that 10 years as a hobbyist between a little BASIC, C and a lot of assembly (65C02, 68k, PPC and x86).
In the last 6 months, when I have had an assignment that involved coding, AI has generated 100% of my code. I just described the abstractions I wanted and reusable modules/classes I needed and built on it.
Claude writes 99% of my code, I’m just a manager and architect and QC now.
What sort of coding do you do? I can't imagine getting that level of code from AI
I use the inline commands to convert pseudo code to normal code. It works great.
Is "vibe coding" synonymous with using AI code-generation tools now?
I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM. Which is fun and worth trying, but probably not a valid professional tool.
I think what happened is that a lot of people started dismissing all LLM code creation as "vibe coding" because those people were anti-LLM, and so the term itself became an easy umbrella pejorative.
And then, more people saw these critics using "vibe coding" to refer to all LLM code creation, and naturally understood it to mean exactly that. Which means the recent articles we've seen about how good vibe coding starts with a requirements file, then tests that fail, then tests that pass, etc.
Like so many terms that started out being used pejoratively, vibe coding got reclaimed. And it just sounds cool.
Also because we don't really have any other good, memorable term for describing code built entirely with LLMs from the ground up, separate from mere autocomplete AI or using LLMs to work on established codebases.
“Agentic coding” is probably more accurate, though many people (fairly) find the term “Agentic” to be buzz-wordy and obnoxious.
I’m willing to vibe code a spike project. That is to say, I want to see how well some new tool or library works, so I’ll tell the LLM to build a proof of concept, and then I’ll study that and see how I feel about it. Then I throw it away and build the real version with more care and attention.
I have "vibe coded" a few internal tools now that are very low risk in terms of negative business impact but nonetheless valuable for our team's efficiency.
E.g. one tool packages a debug build of an iOS simulator app with various metadata and uploads it to a specified location.
Another tool spits out my team's github velocity metrics.
These were relatively small scripting apps, that yes, I code reviewed and checked for security issues.
I don't see why this wouldn't be a valid professional tool? It's working well, saves me time, is fun, and safe (assuming proper code review, and LLM tool usage).
With these little scripts it creates, it's actually pretty quick to validate their safety and efficacy. They're like NP problems: verifying a solution is much easier than producing one.
The original definition of vibe coding meant that you just let the agent write everything, and if it works then you commit it. Your code review and security check turned this from vibe coding into something else.
This is complicated by the fact that some people use “vibe coding” to mean any kind of LLM-assisted coding.
Yeah, for some reason the term has been used interchangeably for a while, which is making it very hard to have a conversation about it since many people think vibe coding is just using AI to assist you.
From Karpathy's original post I understood it to be what you're describing. It is getting confusing.
The term sounds funny and quirky, so got overused. Also simply the term pushes emotional buttons on a lot of people so it's good for clickbait.
My personal definition of "vibe coding" is when a developer delegates -- abdicates, really -- responsibility for understanding & testing what AI-generated code is doing and/or how that result is achieved. I consider it something that's separate from & inferior to using AI as a development tool.
I think there is actually pressure to show that you are using AI (stories of CEOs firing employees who supposedly did not "embrace" AI), so people are over-attributing to AI. Though originally vibe coding was meant to be infinite-monkey-style button mashing, people are attributing their work to it just to avoid the crosshairs.
Jesus, that's a staggering figure to me coming from senior developers. I guess I'm the odd one out here, but ChatGPT is nothing more than an index of Stack Overflow (and friends) for me. It's essentially replaced Googling, but once I get the answer I need I'm still just slinging code like an asshole. Copying the output wholesale from any of these LLMs just seems crazy to me.
Really?
I'm not a coder but a sysadmin. 35 years or so. I'm conversant with Perl, Python, (nods to C), BASIC, shell, PowerShell, AutoIt (et al.).
I muck about with CAD - OpenSCAD, FreeCAD, and 3D printing.
I'm not a senior developer - I pay them.
LLMs are handy in the same way I still have my slide rules and calculators (OK kids I use a calc app) but I do still have my slide rules.
ChatGPT does quite well with the basics for a simple OpenSCAD effort but invents functions within libraries. That is to be expected - it's a next-token decider function and not a real AI.
I find it handy for basics, very basic.
I just got back into OpenSCAD after recently getting my first new 3D Printer in 10 years, so I basically had to relearn it. ChatGPT got the syntax wrong for the most basic of operations.
Developers are lazy. Anything that makes development faster or easier is going to be welcomed by a good developer.
If you find it is quicker not to use it then you might hate it, but I think it is probably better in some cases and worse in other cases.
as a developer my first priority is whether the software works, not whether it is fast or easy to develop
I think we can assume that what daft_pink means by "development" includes that the software works.
("Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize." - https://news.ycombinator.com/newsguidelines.html)
> Anything that makes development faster or easier is going to be welcomed by a good developer.
I strongly disagree. Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it.
> Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it.
There is significant evidence that shows mixed results for struggle-based learning - it’s highly individualized and has to be calibrated carefully: https://consensus.app/search/challenge-based-learning-outcom...
Indeed. This is my biggest fear for engineers as a whole. LLMs can be a great productivity boost in the very short term, but can so easily be abused. If you build a product with it, suddenly everyone is an engineering manager and no one is an expert on it. And growth as an engineer is stunted. It reminds me of abusing energy drinks or grinding to the point of burnout... But worse.
I think we'll find a middle ground though. I just think it hasn't happened yet. I'm cautiously optimistic.
“I don't know half of you half as well as I should like; and I like less than half of you half as well as you deserve.”
Of course. We are the most well equipped to run with it. Others will quickly create a sloppy mess while wise developers can keep the beast tame.
> survey of 791 developers
We have got to stop. In a universe of well over 25 million programmers a sample of 791 is not significant enough to justify such headlines.
We’ve got to do better than this, whatever this is.
Validity of the sample size is not determined by its fraction of the whole population. I don't know the formulas and I'm not a statistician. Maybe someone can drop some citations.
I generally agree with this just from a perspective of personal sentiment, it does feel wrong.
But statistically speaking, at a 95% confidence level you'd be within a +/- 3.5% margin of error given the 791 sample size, irrespective of whether the population is 30k or 30M.
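For the curious, that figure is just the standard worst-case margin-of-error calculation (p = 0.5, z ≈ 1.96 for 95% confidence):

    import math

    n = 791                      # sample size
    p = 0.5                      # worst-case proportion
    z = 1.96                     # 95% confidence
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{moe:.3f}")          # ~0.035, i.e. about +/- 3.5 percentage points

The population size only enters via a finite-population correction, which is negligible when the sample is a tiny fraction of the population, hence 30k vs 30M barely matters.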
You should read more about statistical significance. Under some reasonable assumptions, you can confidently deduce things from small sample sizes.
From another perspective: we've deduced a lot of things about how atoms work without any given experiment inspecting more than an insignificant fraction of all atoms.
TL;DR: The population size (25e6 total devs, 1e80 atoms in observable universe) is almost entirely irrelevant to hypothesis testing.
LMFTFY: a third of senior developers who answer surveys say over half of their code is AI-generated
Haha right. I would imagine a "Senior Developer" who is super into AI assisted coding would be more likely to come across this survey and want to participate.
It's 80% at least for me. I've hit a groove for sure. Not only is that 80% very well tested, it's almost exactly how I would have preferred it. Big tips: design your CLAUDE.md files the way you would actually code from a high-level perspective, and don't try the usual "You are an expert Google Distributed Engineer" and all that embarrassing AI hype-bro crap.
I’m still yet to find a use-case for AI-generated code in my workflow.
Even when I am building tools that heavily utilize modern AI, I haven’t found it. Recently, I disabled the AI-powered code completion in my IDE because I found that the cognitive load required to evaluate the suggestions it provided was greater and more time consuming than just writing the code I was already going to write anyways.
I don’t know if this is an experience thing or not - I mainly work on a tech stack I have over a decade of experience in - but I just don’t see it.
Others have suggested generating tests with AI but I find that horrifying. Tests are the one thing you should be the most anal about accuracy on compared to anything else in your codebase.
Apparently vibe coding now just means ai assisted coding beyond immediate code completion?
For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt. The relevant key business and technical concerns come into play, including complexities like balancing somewhat conflicting shorter and longer term concerns.
Juniors are probably all going to have to be learning this kind of stuff at an accelerated rate now (we don't need em cranking out REST endpoints or whatever anymore), but at this point this takes a senior perspective and senior skills.
Anyone can get an LLM and agentic tool to crank out code now. But you really need to have them crank out code to do something useful.
Sounds about right in my experience. Not every piece of code has to be elite John Carmack tier quality
Fastly is in the AI business like Cloudflare:
https://www.fastly.com/products/ai
https://www.fastly.com/products/fastly-ai-bot-management
https://www.fastly.com/documentation/guides/compute/about-th...
90% of my code is auto generated using custom code generation tools. No AI needed.
I'm not sure I believe this. It's the exact opposite in my experience - the young'uns are all over vibe coding.
> around a third of senior developers with more than a decade of experience are using AI code-generation tools such as Copilot, Claude, and Gemini to produce over half of their finished software, compared to 13 percent for those devs who've only been on the job for up to two years.
A third? I would expect at least a majority based on the headline and tone of the article... Isn't this saying 66% are down on vibe coding?
(Article was https://www.theregister.com/2025/08/28/older_developers_ai_c... when this was posted; we've since changed it. We also changed the title.)
I guess the article was vibe coded.
"This one developer was down with the vibe coding."
So many words to say nothing. Maybe it was generated by an AI tool?