> In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.
What a wild and speculative claim. Is there any source for this information?
At $WORK, we have a Slack-integrated bot that sets up minor PRs: adjusting Terraform, updating endpoints, adding simple handlers. It does pretty well.
In another case of pure prose-to-code, Claude wrote a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess and could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now, and the calculus on my AI usage may be shifting. However, the following day, a colleague needed a nearly identical temporary tool. A 45-minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
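For what it's worth, the missing piece in that first draft was shutdown plumbing. Here is a minimal sketch of what "managed" goroutines can look like in Go, using signal.NotifyContext plus a sync.WaitGroup so the tool can be killed gracefully; the batch logic and worker count are hypothetical stand-ins, not the actual utility:

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"sync"
	"syscall"
)

// migrateBatch stands in for the real per-batch migration work.
func migrateBatch(ctx context.Context, id int) {
	select {
	case <-ctx.Done(): // stop promptly when the tool is shutting down
		return
	default:
		log.Printf("migrating batch %d", id)
	}
}

func main() {
	// Cancel the context on SIGINT/SIGTERM so workers can exit cleanly.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for batch := id; ; batch += 4 {
				if ctx.Err() != nil {
					return // graceful shutdown
				}
				migrateBatch(ctx, batch)
				if batch > 100 { // hypothetical end condition
					return
				}
			}
		}(i)
	}
	wg.Wait() // main only exits once every worker has finished
	log.Println("migration stopped cleanly")
}
```

Ctrl-C cancels the context, each worker returns at its next batch boundary, and wg.Wait() keeps main from exiting before they all do.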
Really, I'm taking a hybrid approach: I write much of my scaffolding, write example code, tweak quick things the AI made to be more like what I want, set up guardrails and some tests, then have the AI go to town. Results are mixed but still trending up.
FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.
> it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess and could not be gracefully killed
The first pass on a greenfield project is often like that, for humans too I suppose. Once the MVP is up, a refactoring pass with Opus ultrathink to look for areas of weakness and improvement usually tightens things up.
Then as you pointed out, once you have solid scaffolding, examples, etc, things keep improving. I feel like Claude has a pretty strong bias for following existing patterns in the project.
The line right after this is much worse:
> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.
Wow, finance people certainly don't understand programming.
World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.
It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.
It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.
Is this why AI is telling us our every idea is brilliant and great? Because their code doesn't stand up to what we can do?
Copilot is easily the worst (and probably slowest) coding agent. SOTA and Copilot don't even inhabit similar planes of existence.
> Including how it looks at the surrounding code and patterns.
Citation needed. Even with specific examples, “follow the patterns from the existing tests”, etc., Copilot (GPT-5) still insists on generating tests using the wrong methods (“describe” and “it” in a codebase that uses “suite” and “test”).
An intern, even an intern with a severe cognitive disability, would not be so bad at pattern following.
Do you think smart companies seeking to leverage AI effectively in their engineering orgs are using the $20 slopify subscription from Microsoft?
You get what you pay for.
They don’t. I’ve gone from rickety, slow Excel sheets and maybe some Python functions to automate the small things I could figure out, to building out entire data pipelines. It’s incredible how much more efficient we’ve gotten.
Ask ChatGPT “is AI programming world class?”
It's not. And if your team is doing this you're not "advanced."
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.
Which is great! But it's not a +1 for AI, it's a -1 for them.
Part of the issue is that I think you are underestimating the number of people not doing "advanced" programming. If it's around 80-90%, then that's a lot of +1s for AI.
Why do you feel like I'm underestimating the # of people not doing advanced programming?
I've had Claude Code compose complex AWS infrastructure (using Pulumi IaC) that mostly works from a one-shot prompt.
I'm on a team like that, and I see it happening in more and more companies around me. Maybe "many" does the heavy lifting in the quoted text, but it is definitely happening.
Probably their googly-eyed vibe coder friend told them this and they just parroted it.
Right. The author is non-technical and said so up front.
If by "code" you mean machine code, it's been true for decades now. Of course the instructions on how to auto-generate the machine code you want must be quite complex and highly precise if you want directly usable results, but the tech works very well. It's called Automatic Programming and is considered a subset of Artificial Intelligence (AI).
If true I’d like to know who is doing this so I can have exactly nothing to do with them.
I only write around 5% of the code I ship, maybe less. For some reason when I make this statement a lot of people sweep in to tell me I am an idiot or lying, but I really have no reason to lie (and I don't think I'm an idiot!). I have 10+ years of experience as an SWE, I work at a Series C startup in SF, and we do XXMM ARR. I do thoroughly audit all the code that AI writes, and often go through multiple iterations, so it's a bit of a more complex picture, but if you were to simply say "a developer is not writing the code", it would be an accurate statement.
Though I do think "advanced software team" is kind of an absurd phrase, and I don't think there is any correlation with how "advanced" the software you build is and how much you need AI. In fact, there's probably an anti-correlation: I think that I get such great use out of AI primarily because we don't need to write particularly difficult code, but we do need to write a lot of it. I spend a lot of time in React, which AI is very well-suited to.
EDIT: I'd love to hear from people who disagree with me or think I am off-base somehow about which particular part of my comment (or follow-up comment https://news.ycombinator.com/item?id=46222640) seems wrong. I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.
>I only write around 5% of the code I ship, maybe less.
>I do thoroughly audit all the code that AI writes, and often go through multiple iterations
Does this actually save you time versus writing most of the code yourself? In general, it's a lot harder to read and grok code than to write it [0, 1, 2, 3]. For me, one of the biggest skills for using AI to efficiently write code is a) chunking the task into increments that are both small enough for me to easily grok the AI-generated code and also aligned enough to the AI's training data for its output to be ~100% correct, b) correctly predicting ahead of time whether reviewing/correcting the output for each increment will take longer than just doing it myself, and c) ensuring that the overhead of a) and b) doesn't exceed just doing it myself.
[0] https://mattrickard.com/its-hard-to-read-code-than-write-it
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://trishagee.com/presentations/reading_code/
[3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...
Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive, though it depends exactly what I’m working on. Most of the issues that you cite can be solved, though it requires you to rewire the programming part of your brain to work with this new paradigm.
To be honest, I don’t really have a problem with chunking my tasks, because I don’t really think about it that way. I care a lot more about chunks an AI could reasonably validate. Instead of thinking “what’s the biggest chunk I could reasonably ask AI to solve?” I think “what’s the biggest piece I could ask an AI to do that I can write a script to easily validate once it’s done?” Allowing the AI to validate its own work means you never have to worry about chunking again. (OK, that's a slight hyperbole, but the validation is most of my concern, and a secondary concern is that I try not to let it go for more than 1000 lines.)
For instance, take the example of an AI rewriting an API call to support a new db library you are migrating to. In this case, it’s easy to write a test case for the AI. Just run a bunch of cURLs on the existing endpoint that exercise the existing behavior (surely you already have these because you’re working in a code base that’s well tested, right? right?!?), and then make a script that verifies that the result of those cURLs has not changed. Now, instruct the AI to ensure it runs that script and doesn’t stop until the results are character for character identical. That will almost always get you something working.
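A minimal Go sketch of that kind of validation harness, assuming a local dev server and made-up endpoint paths (none of this is from the original setup): run it once with --record against the old implementation to capture golden responses, then tell the agent it isn't done until a plain run prints OK.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// Hypothetical endpoints whose behavior must not change during the migration.
var endpoints = []string{"/api/users/42", "/api/orders?limit=10"}

func fetch(base, path string) ([]byte, error) {
	resp, err := http.Get(base + path)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "http://localhost:8080" // assumed local dev server
	record := len(os.Args) > 1 && os.Args[1] == "--record"
	if record {
		if err := os.MkdirAll("testdata", 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	for i, ep := range endpoints {
		body, err := fetch(base, ep)
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			os.Exit(1)
		}
		golden := filepath.Join("testdata", fmt.Sprintf("golden_%d.json", i))
		if record {
			// First run, against the old code path: save the expected output.
			if err := os.WriteFile(golden, body, 0o644); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			continue
		}
		want, err := os.ReadFile(golden)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if !bytes.Equal(body, want) {
			fmt.Fprintf(os.Stderr, "FAIL: %s differs from golden file\n", ep)
			os.Exit(1)
		}
	}
	fmt.Println("OK: all responses match the golden files")
}
```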
Obviously the tactics change based on what you are working on. In frontend code, for example, I use a lot of Playwright. You get the idea.
As for code legibility, I tend to solve that by telling the AI to focus particularly on clean interfaces, and being OK with the internals behind those interfaces being vibecoded and a little messy, so long as the external interface is crisp and well-tested. This is another very long discussion, and for the non-vibe-code-pilled (sorry) it probably sounds insane, and it's easy to lose one's audience on such a polarizing topic, so I'll keep it brief. One real key thing to understand about AI is that it makes the cost of writing unit tests and e2e tests drop significantly, and I find this (along with remaining disciplined and keeping interfaces crisp) to be an excellent tool in the fight against the increased code complexity that AI tools bring. In short, I deal with legibility by having a few really clean, extremely readable interfaces/APIs, and then testing them like crazy.
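To make that concrete, here is a tiny Go sketch of the "crisp interface, heavily tested, messy internals allowed" idea; the Quoter interface and the flat-rate implementation are invented for illustration, not taken from the parent comment:

```go
package pricing

import "testing"

// Quoter is the crisp, reviewed boundary; whatever sits behind it can be
// messy as long as these tests keep passing.
type Quoter interface {
	Quote(items int, member bool) (cents int, err error)
}

// flatRateQuoter is a stand-in implementation (hypothetical, for illustration).
type flatRateQuoter struct{ perItem int }

func (f flatRateQuoter) Quote(items int, member bool) (int, error) {
	total := items * f.perItem
	if member {
		total = total * 90 / 100 // 10% member discount
	}
	return total, nil
}

func TestQuoter(t *testing.T) {
	var q Quoter = flatRateQuoter{perItem: 500}
	cases := []struct {
		name   string
		items  int
		member bool
		want   int
	}{
		{"single item", 1, false, 500},
		{"member discount", 1, true, 450},
		{"bulk order", 10, false, 5000},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			got, err := q.Quote(c.items, c.member)
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if got != c.want {
				t.Fatalf("Quote(%d, %v) = %d, want %d", c.items, c.member, got, c.want)
			}
		})
	}
}
```

The table of cases is the part that stays cheap to extend, which is where the AI-written tests earn their keep.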
EDIT
There is a dead comment that I can't respond to that claims that I am not a reliable narrator because I have no A/B test. Behold, though: I am the AI-hater's nightmare, because I do have a good A/B test! I have a website that sees a decent amount of traffic (https://chipscompo.com/). Over the last few years, I have tried a few times to modernize and redesign the website, but these attempts have always failed because the website is pretty big (~50k loc) and I haven't been able to fit it in a single week of PTO.
This Thanksgiving, I took another crack at it with Claude Code, and not only did I finish an entire redesign (basically touched every line of frontend code), but I also got in a bunch of other new features, too, like a forgot-password feature and a suite of moderation tools. I then IaC'd the whole thing with Terraform, something I only dreamed about doing before AI! Then I bumped React a few major versions, bumped TS about 10 years, etc., all with the help of AI. The new site is live and everyone seems to like it (well, they haven't left yet...).
If anything, this is actually an unfair comparison, because it was more work for the AI than it was for me when I tried a few years ago: my dependencies had become more and more out of date as the years went on! This was actually a pain for the AI, but I eventually managed to solve it.
Did you do 5-10 years of work in the year after you adopted AI? If you started after AI came into existence 3 years ago (/s), you should have achieved 30 years of work output - a whole career of work.
I think AI only "got good" around the release of Claude Code + Opus 4.0, which was around March of this year. And it's not like I sit down and code 8 hours a day 5 days a week. I put on my pants one leg at a time -- there's a lot of other inefficiencies in the process, like meetings, alignment, etc, etc.
But yes, I do think that the efficiency gain, purely in the domain of coding, is around 5x, which is why I was able to entirely redesign my website in a week. When working on personal projects I don't need to worry about stakeholders at all.
a) is exactly what AI is good at. b) is a waste of time: why would you waste your precious time trying to predict a result when you can just get the result and see?
You are stuck in a very low local maximum.
You are me six months ago. You don’t know how it works, so you cannot yet reason about it. Unlike me, you’ve decided “all these other people who say it’s effective are making it up”. Instead, ask: how does it work? What am I missing?
I regularly try to use various AI tools and I can imagine it is very easy for it to produce 95% of your code. I can also imagine you have 90% more code than you would have had you written it yourself. That’s not necessarily a bad thing, code is a means to an end, and if your business is happy with the outcomes, great, but I’m not sure percentages of code are particularly meaningful.
Every time I try to use AI it produces endless code that I would never have written. I’ve tried updating my instructions to use established dependencies when possible but it seems completely averse.
An argument could be made that a million lines isn’t a problem now that these machines can consume and keep all the context in memory — maybe machines producing concise code is asking for faster horses.
Everyone is doing this extreme pearl clutching around the specific wording. Yeah, it's not 100% accurate, for many reasons, but the broader point was about employment effects: it doesn't need to completely replace every single developer to have a sizable impact. Sure, it's not there yet and it's not particularly close, but can you be certain that it will never be there?
Error bars, folks, use them.
AI writes most of the code for most new YC companies, as of this year.
I think this is less significant b/c
1. Most of these companies are AI companies & would want to say that to promote whatever tool they're building
2. Selection b/c YC is looking to fund companies embracing AI
3. Building a greenfield project with AI to the quality of what you need to be a YC-backed company isn't particularly "world-class"
They’re not lying when they say they have AI write their code, so it’s not just promotion. They will thrive or die from this thesis. If present YC portfolio companies underperform the market in 5-10 years, that’s a strong signal for AI skeptics. If they overperform, that’s a strong signal that AI skeptics were wrong.
3. You are absolutely right. New startups have greenfield projects that are in-distribution for AI. This gives them faster iteration speed. This means new companies have a structural advantage over older companies, and I expect them to grow faster than tech startups that don’t do this.
Plenty of legacy codebases will stick around, for the same reasons they always do: once you’ve solved a problem, the worst thing you can do is rewrite your solution to a new architecture with a better devex. My prediction: if you want to keep the code writing and office culture of the 2010s, get a job internally at cloud computing companies (AWS, GCP, etc). High reliability systems have less to gain from iteration speed. That’s why airlines and banks maintain their mainframes.
So they don't own the copyright to most of their code? What's the value then?
They do. Where did you get this? All the providers have clauses like this:
"4.1. Generally. Customer and Customer’s End Users may provide Input and receive Output. As between Customer and OpenAI, to the extent permitted by applicable law, Customer: (a) retains all ownership rights in Input; and (b) owns all Output. OpenAI hereby assigns to Customer all OpenAI’s right, title, and interest, if any, in and to Output."
https://openai.com/policies/services-agreement/
The outputs of AI are most likely in the public domain, since the output of an automated process is public domain; and the companies claim fair use when scraping, making the input unencumbered, too.
It wouldn't be OpenAI holding copyright - it would be no one holding copyright.
It's not automated. I guide it through prompting, at the least.
That explains the low quality of all the Launch HNs this year.
Stats/figures to backup the low quality claim?
If you have them, post them.
source: me
I wrote 4000 lines of Rust code with Codex - a high-throughput WebSocket data collector.
Spoiler: I do not know Rust at all. I discussed possible architectures with GPT/Gemini/Grok (sync/async, data flow, storage options, ...), refined a design and then it was all implemented with agents.
Works perfectly, no bugs.
I would be interested in a web series (podcast or video) where people who do not know a language create something with AI. Then somebody with experience building in that technology reviews the code and gives feedback on it.
I am personally progressing to a point where I wonder if it even matters what the code looks like if it passes functional and unit tests. Do patterns matter if humans are not going to write and edit the code? Maybe sometimes. Maybe not other times.
I'm on a team like this currently. It's great when everyone knows how to use the tools and spot/kill slop and bad context. Generally speaking, good code gets merged and MUCH more quickly than in the past.
The postscript was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. This is a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities, whereas this technology seems exciting only for management stressed about payroll.
It's true that the technology currently works as an excellent information-gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which... is certainly new and unwelcome.
> It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise.
Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!
I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.
I never knew there was an entire subclass of people in my field who don't want to write code.
I want to write code.
So write code.
Maybe post-Renaissance many artists no longer had patrons, but nothing was stopping them from painting.
If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely in my opinion), nobody is stopping you. It's easier than ever; you have decades of personal computing at your fingertips.
Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time?
I also love to code, though it's not what people get paid to do anymore.
You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare.
I'm quite ok with only writing code in my personal time. In fact, if I could solve the problems there faster, I'd be delighted.
Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value.
It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers that are excited to not have to write code. It's depressing.
LLM slop doesn't have aspirations at all; it's just clickbait nonsense.
https://www.youtube.com/watch?v=_zfN9wnPvU0
Drives people insane:
https://www.youtube.com/watch?v=yftBiNu0ZNU
And LLMs are economically and technologically unsustainable:
https://www.youtube.com/watch?v=t-8TDOFqkQA
These have already proven it will be unconstrained if AGI ever emerges.
https://www.youtube.com/watch?v=Xx4Tpsk_fnM
The LLM bubble will pass, as it is already losing money with every new user. =3
He thinks "AI" "may be capable of taking over cognition", which shows he doesn't understand how LLM work...
Why is AI limited to just a raw LLM? Scaffolding, RL, multi-modal... there are so many techniques that can be applied. METR has shown AI's time horizon for staying on task is doubling every 7 months or less.
https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...
Because all the money has been going into LLMs and "inference machines" (what a non-descriptive name). So when an investor says "AI", that's what they mean.
A lot of the debate here swings between extremes. Claims like “AI writes most of the code now” are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But as with the dotcom era, a correction doesn't invalidate the underlying shift; it just clears out the noise. The hype is inflated, the technology is real.
If you look at the chart at the bottom comparing Dec 99 to today....
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true, the current players are more profitable, but their weight as a percentage of the SPX looks to be much higher today.
I've enjoyed Howard Marks's writing/thinking in the past, but this is clearly a person who thinks they understand the topic but doesn't have the slightest clue. Someone trying to be relevant/engaged before really thinking about what is fact vs. fiction.
As usual I don't take financial advice from Hacker News comments and do well.
Originally submitted here: https://news.ycombinator.com/item?id=46212259
Why is so much invested in AI but not in fusion power?
There is probably not any large market for fusion power as we conceive of it today.
You will get a different result if you revolutionize some related area (like making an extremely capable superconductor), or if you open up some market that can't use the cheapest alternatives (like deep-space asteroid mining). But neither of those options can go together with "oh, and we will achieve energy-positive fusion" in a startup business plan.
Probably because AI appears to work, more or less, and now it's just a race to make it better and to monetize it.
Before ChatGPT, I'd guess that the amounts of money poured in both of these things were about the same.
There are a lot of areas that could use more investment but aren't getting it. The way this works is complicated. The best explanation comes from really understanding Moore's Law: the main effect of the law was about investment, about securing investment into semiconductor fabs rather than anywhere else.
See, every fab costs double what the previous generation did (current ones run roughly 20 gigadollars per factory), and you need to build a new fab every couple of years. But if you can keep your order book full, you can make a profit on that fab: you can get good ROI on the investment and pay the money people back nicely. You still need to go to the markets to raise money for that next-generation fab, because it costs twice what your previous generation did and you didn't get that much free cash from the previous one. Left to themselves, the money men wouldn't want to give it to you. But thanks to Moore's Law you can pitch it as inevitable: if you don't borrow the money to build the new fab, your competitors will. And so they would give you the money for the new fab, because it says right on this paper that in another two years the transistors will double.
Right now, that "it's inevitable, our competitors will get there if we don't" argument works on VCs if you are pitching LLM's or LLM based things. And it doesn't work as well if you are pitching battery technology, fusion power, or other areas. And that's why the investments are going to AI.
Because wind, solar and battery tech have given us most of the benefits of fusion power and it actually works today.
Look for the quote "coding is at a world class level"...
Impressive that you can have that many assets under management and still not show a clear understanding of an industry you're prognosticating on. The author doesn't talk at all about the hardware aspect of this stuff such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace. The recommendation that you take a moderate investment position and not overdo it could be shared without as much needless thinking out loud, and doesn't bring anything new to the conversation. Kind of like every other AI offering out there, if you think about it -- participating in something you don't understand because of FOMO.
> The author doesn't talk at all about the hardware aspect of this stuff such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace.
There is literally a section that begins, "What will be the useful life of AI assets?" In bold.
> The author doesn't talk at all about the hardware aspect of this stuff such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace.
I am not sure why that is interesting. Nobody thinks of these chips as long term assets that they are investing in. Cloud providers have always amortized their computers over ~5 years. It would be very surprising if AI companies were doing much different -- maybe even a shorter time line.
This thread is just full of people discussing why industrial looms are bad. The factory owners don’t think looms are bad. You can either learn how to be useful in the new factory or you can start throwing shoes.
I think this gives an excellent framework for how to think about this. Is it a bubble? "Who knows" is a perfectly valid answer.
I do think there’s something quite ironic in the fact that one of the frequent criticisms of LLMs is that they can’t really say “I don’t know”, yet when someone does say that, they get criticised. No surprise that our tools are the same.
Is it "work"?
Off-topic: how many get overpaid for absolute bullshit?
The problem is that people conflate the current wave of transformer based ANNs with AI (as a whole). AI certainly has the potential to disrupt employment of humans. Transformers as they exist today not so much.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise, and this will be incredibly damaging for AI as a whole once transformer-tech investment all but dries up.
It's a recurring phenomenon, cf. "AI winter" and the cycle before and after.
We're too easily fooled by our mistaken models of the problem, its difficulty, and what constitutes progress, so we are perpetually fooled by the latest, greatest "ladder to the moon" effort.
AI is currently a bubble, but that is just a short-term phenomenon. Ultimately, what AI currently is, and what the trend line indicates it will become, will change the economy in ways that dwarf the current bubble.
But that's only if the trend line keeps going, which seems likely given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI is completely bullshit. Remember: the internet was a bubble. It ended up changing the world.
Yes, a bubble just means that it's over-valued and that at some point there will be a significant correction in stock values. It doesn't mean that the thing is inherently worthless.
A great example is the DotCom bubble. Wiped out a lot of capital but it really did transform the world.
But also, a lot of the dot com companies that people invested in in 1999 went bust, meaning those specific investments went to zero even if the web as a whole was a huge success financially.
Sure...that's why it's important to diversify investments. For every Pets.com, hopefully you have a Google in your portfolio.
Or, you skip all that and just put it all in an S&P 500 fund.
I started working in 1997 and lived through the dot com bubble and collapse. My advice to people is to diversify away from your company stock. I knew a lot of people at Cisco that had stock options at $80 and it dropped to under $20.
Because of the way the AMT (Alternative Minimum Tax) worked at the time, they bought the stock, did not sell, but owed taxes on the gain as of the day of purchase. They had tax bills of over $1 million, but even if they sold it all they couldn't pay the bill. This dragged on for years.
Google says the dotcom bubble ran roughly from 1995 to 2001. That's about 6 years. ChatGPT was released in 2022. Claude was released in 2023. DeepSeek was released in 2023.
Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.
I do believe we are in the build-out phase of the AI bubble, much like the dotcom era, when Cisco routers, Sun Microsystems servers, etc. sold like hotcakes to build up the foundation of that bubble.
> Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.
Minimum 3 years, and a hard maximum of 6 years from now.
We'll see lots of so called AI companies fold and there will be a select few winners that stay on.
So I'd give my crash timelines at around 2029 to 2031 for a significant correction turned crash.
>I find the resulting outlook for employment terrifying. I am enormously concerned about what will happen to the people whose jobs AI renders unnecessary, or who can’t find jobs because of it. The optimists argue that “new jobs have always materialized after past technological advances.” I hope that’ll hold true in the case of AI, but hope isn’t much to hang one’s hat on, and I have trouble figuring out where those jobs will come from. Of course, I’m not much of a futurist or a financial optimist, and that’s why it’s a good thing I shifted from equities to bonds in 1978.
It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".
And I am so buying the vision of Elon using AI to give me free stuff. He just gives off this enormous altruistic energy.
Of course it's a bubble. Valuations are propped up by speculative spending and AI seems unable to make enough profit to make back the continued spending.
Now, that's not to say AI isn't useful or that we won't have AGI in the future. But this feels a lot like the AI winter: valuations will crash, a bunch of players will disappear, but we'll keep using the tech for boring things, and eventually we'll have another breakthrough.
This is one of the few times I think Betteridge's law is wrong.
Yep. But be careful, somebody doesn't like it when you say that.
https://news.ycombinator.com/context?id=46136301
A take I saw recently is: if people are still asking "are we in a bubble" then we are not yet in a bubble.
I think it'd be truer to say that you can't be sure it's a bubble until after it pops.
That is like telling people who face natural disasters to wait until one happens, only then ask "how much damage will it bring?", and then have someone else tell them it cost them everything.
Anyone who has lived through the dotcom bubble knows that this AI mania is an obvious bubble, and the whole point is that you have to prepare before it eventually pops, not after it pops and it's too late.
You don't prepare by making predictions about when it will pop, you prepare by hedging etc.
Just as those who live in earthquake-prone areas build earthquake-resistant buildings.
"Coding performed by AI is at a world-class level". Once I hit that line, I stopped reading. This tells me this person didn't do proper research on this matter.
I recently had ChatGPT refactor an entire mathematical graph rendering logic that I wrote in vanilla js, and had it rewrite it as GLSL. It took about an hour overall (required a few prompts). That is world-class level in my opinion.
If I told people that I can write code at a world-class level, and then in some of my reviews I made junior mistakes, made up functions or dependencies that do not exist, or was unable to learn from my mistakes, I would be put on a PIP immediately. And after a while, fired. This is the standard LLMs should be held to when you use the words "world class".
TLDR:
Yes.
Sorry to pop your bubble...
...it is a bubble and we all know it.
(I know you have RSUs / shares / golden handcuffs waiting to be vested in the next 1 - 4 years which is why you want the bubble to continue to get bigger.)
But one certainty is the crash will be spectacular.
Every day someone says/asks this statement/question. The "(Is) AI (is) a bubble" statement/question is now a bubble itself.