I am very tired of seeing every random person's speculation (framed as real insight) on what's going to happen, as they try to signal that they are super involved in AI and super on top of it, and therefore still worthy of value and importance in the economy.
One thing I've found out from my years of commenting on the internet is that as long as what you say sounds plausible and you state it with absolute conviction and authority, you can get your 15 minutes of fame as the world's foremost expert on any given topic.
You have to understand the people in the article are execs from the chip EDA (Electronic Design Automation) industry. It's full of dinosaurs who have resisted innovation for the past 30 years. Of course they're going to be blowing hot air about how they're "embracing AI". It's a threat to their business model.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing more insightful or different from what people have been saying for a while now.
What would you like to hear from random people?
the wonderful modern world of "everyone must build their personal brand"
The worst thing is that it works.
(As a musician) I never invested in a personal brand or took part in the social media rat race, and figured I'd concentrate on the art/craft over meaningless performance online.
Well guess who is getting 0 gigs now because “too few followers/visibility” (or maybe my music just sucks who knows …)
Seems like it depends on what your goal is. I'm guessing if you want to be a musician that makes a living in your current life, a personal brand is extremely important. If you don't mind doing it for the sake of the art and soul fulfillment and the off chance you'll be discovered posthumously, then I think it doesn't matter!
To help move the needle a bit (and agreeing with the sibling comment): please share some examples of your music here and where/how we can listen to it!
How can I listen to your music?
Considering there are artists with a large following putting out atrocious work, I think we know.
I think I'm the opposite! The key is to ignore any language that sounds too determined and treat it as an opinion piece on what could happen. There's no way of knowing what will, but I find the theories very interesting.
Yeah if you actually work in AI you usually can’t say much at all about what’s going on.
Sadly this is more a statement about human irrationality than any of the technology involved.
Agreed, but I'd add tech influencers and celebrities to the top of that list, especially those invested in the "AI" hype cycle. At least the perspective of a random engineer is less likely to be tainted by their brand and agenda, and more likely to have genuine insight.
"Temporarily embarrassed AI hypebeasts"
Then don't.
Senior dev here, 15 years' experience, just turned 50, have family, blah blah. I've been contracting for the last two years. The org is just starting to use Claude. I've been delegating - well, copy-pasting - into ChatGPT, which has to be the laziest way to leverage AI. I've been so successful with this approach (meaning I haven't had to do anything really, except argue with ChatGPT when it goes off on some tangent) that I can't even be bothered to set up my Claude environment. I swear when this contract is over I'm opening a mobile food cart.
Same situation (50 last week, 2 kids), though I have been unemployed for a year. Part of me thinks that, rather than taking jobs, AI is actually the only reason a lot of jobs still exist. The rest of tech is dead. Having worked in consulting a while ago, you can kind of feel it when you're approaching the point where you've implemented all the high-value stuff for a client and, even though there's stuff you could do, they're going to drop you to a retainer contract because it's just not the same value.
That's how the whole industry feels now. The only investment money is flowing into AI, and so companies with any tech presence are touting their AI whatevers at every possible moment (including during layoffs) just to get some capital. Without that, I wonder if we'd be seeing even harsher layoffs than we already are.
Software will ALWAYS be an attractive VC target. The economics are just too good. The profit margins are just inherently fat as fuck compared to literally anything else. Your main expense is headcount and the incremental cost of your widget is ~$0? It's literally a dream.
It's also why so much of AI is targeting software, specifically SaaS. A SaaS company with ~0 headcount driven by AI is basically 100% profit margin. A truly perfect conception of capitalism.
Meanwhile, I think AI actually has a decent shot at "curing" cancer. AI-assisted radiology means screening could become significantly cheaper and happen a lot more often, catching cancers very early, which, as everyone knows, is extremely important to surviving it. The cure for cancer might actually just involve much earlier detection. But pfft, what are the profit margins on _that_?
It’s funny that perfect capitalism (no payroll expenses) means nobody has money to actually buy any of the goods produced by AI.
Re cancer: I wonder how significant the cost of reading the results is vs. the logistics of actually running the test.
when software gets cheap to build, the economics will change
I'm similar (turning 50 in a couple of months, wife + 2 kids, etc.) and was telling my wife this morning that the world of software development has definitely changed. I don't know what it will look like in the future, but it won't look like the past. It seems producing the text that can be compiled into instructions for a computer is something LLMs are particularly good at. Maybe a good analogy is going from a bare text editor to a modern IDE. It's happening very fast though, way faster than the evolution of IDEs.
I was saying this yesterday: there will be people building good software somewhere, but the chances of it happening in the current corporate environment are nearing zero. The change is mostly in the management, not in the software development itself. Yeah, we may be like 50% faster, but we are expected to be 10x devs.
You'd have to do even less copy-pasting. Switching to an agent that has access to your source code directory speeds things up so much that the setup time pays for itself on the first day.
I have access to ChatGPT Codex since I'm on the premium plan. Seems like the lowest barrier to entry for me (cost, learning curve). I will truly have to give this a go. My neighbor is also a dev and he is flabbergasted that I have not at least integrated it into a side project.
> I swear when this contract is over I'm opening a mobile food cart.
Please keep us posted. I'm thinking of becoming a small time farmer/zoo keeper.
Not sure if this is sarcasm or not but I will keep everyone posted haha
That is the most down to earth summary of all things AI I've heard so far! Good luck with the cart and be good. :)
Thank you SockThief!
Same, except I am over 60 and when I think of opening a mobile food cart it is sort of a Blade Runner vibe, staffed by a robot ramen chef that grumbles at customers and always says something back to you in some cyber slang that you don’t understand.
What were you doing before programming, at age 35? Different career?
Yes completely different career. Sold financial products.
Very interesting! Thanks for sharing.
It's hard (or at least in my experience) to find people who change careers - more so in their mid-thirties. I'm the opposite -- software developer career, now in my mid-30s, and the AI crap gets me thinking about backup plans career-wise.
That you started at around 35 is a salient point, no? What did you do before?
Sold financial products before. Curious why you think my starting age was important?
Is it just me, or does Claude Code's UI design, which prevents both copy-pasting large snippets and viewing the code as it's generated, feel incredibly discomforting?
Weird side question, but any chance you use(d) the same name on Playstation Network?
No, xbox ecosystem with different user name.
That's fair. It would have been a weird reunion anyway.
> I swear when this contract is over I'm opening a mobile food cart.
This is the way. I think I'd like to be a barista or deliver the mail once all the jobs are gone.
> I think I'd like to be a barista or deliver the mail once all the jobs are gone.
Those are even easier to automate, or have already been automated most of the way.
I have read this same comment so many times in various forms. I know many of them are shill accounts/bots, but many are real. I think there are a few things at play that make people feel this way. Even if you're in a CRUD shop with low standards for reliability/scale/performance/efficiency, a person who isn't an experienced engineer could not make the LLM do your job. LLMs have a perfect combination of traits that cause people to overestimate their utility. The biggest one I think is that their utility is super front-loaded.
Before, a task might take you ten hours: think the thing through, translate that into an implementation approach, implement it, and test it. At the end of those ten hours you're 100% there, and you've got a good implementation which you understand and can explain to colleagues in detail later if needed. Your code was written by a human expert with intention, and you reviewed it as you wrote it and as you planned the work out.
With an LLM, you spend the same amount of time figuring out what you're going to do, plus more time writing detailed prompts and making the requisite files and context available for the LLM, then you press a button and tada, five minutes later you have a whole bunch of code. And it sorta seems to work. This gives you a big burst of dopamine due to the randomness of the result. So now, with your dopamine levels high and your work seemingly basically done, your brain registers that work as having been done in those five minutes.
But now (if you're doing work people are willing to pay you for), you probably have to actually verify that it didn't break things or cause huge security holes, and clean up the redundant code and other exceedingly verbose garbage it generated. This is not the same process as verifying your own code. First, LLM output is meant to look as correct as possible, and it will do some REALLY incorrect things that no sane person would do, things that are not easy to spot in the same way you'd spot them if they were human-written. You also don't really know what all of this shit is - it almost always has a ton of redundant code, or just exceedingly verbose nonsense that ends up being technical debt and more tokens in the context for the next session. So now you have to carefully review it. You have to test things you wouldn't have had to test, with much more care, and you have to look for things that are hard to spot, like redundant code or regressions in other features it shouldn't have touched. And you have to actually make sure it did what you told it to, because sometimes it says it did, and it just didn't. This is a whole process. You're far from done here, and this (to me at least) can only be done by a professional. It's not hard - it's tedious and boring, but it does require your learned expertise.
So set up e2e tests and make sure it does things you said you wanted. Just like how you use a library or database. Trust, but verify. Only if it breaks do you have to peek under the covers.
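A minimal sketch of what that could look like in practice, assuming a pytest + requests setup and a hypothetical local users API (the endpoints, port, and payloads are made up for illustration):

    # e2e checks that pin down what the generated code was supposed to do,
    # treating it like any other third-party dependency: trust, but verify.
    import requests

    BASE_URL = "http://localhost:8000"  # assumed local dev server

    def test_create_and_fetch_user():
        # Creating a user should persist it and return it on lookup.
        created = requests.post(f"{BASE_URL}/users", json={"name": "Ada"}, timeout=5)
        assert created.status_code == 201
        user_id = created.json()["id"]

        fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "Ada"

    def test_missing_user_returns_404():
        # Guards against the "it says it did it, but it didn't" failure mode.
        resp = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=5)
        assert resp.status_code == 404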
Sadly people do not care about redundant and verbose code. If that were a concern, we wouldn't have 100+ MB apps, nor 5 MB web app bundles. Multibillion-dollar B2B apps ship a 10 MB JSON file just for searching emojis and no one blinks an eye.
> So set up e2e tests and make sure it does things you said you wanted
Gee, why didn't I think of that
> Sadly people do not care about redundant and verbose code
Yikes.
I think a lot of the proliferation of AI as a self-coding agent has been driven by devs who haven’t written much meaningful code, so whatever the LLM spits out looks great to them because it runs. People don’t actually read the AI’s code unless something breaks.
There are exceptions to what I'm about to say, but it is largely the rule.
The thing a lot of people who haven't lived it don't seem to recognize is that enterprise software is usually buggy and brittle, and that's both expected and accepted because most IT organizations have never paid for top technical talent. If you're creating apps for back-office use, or even supply chain and sometimes customer-facing stuff, frequently 95% availability is good enough, and things that only work about 90-95% of the time without bugs are also good enough. There's such an ingrained mentality in big business that "internal tools suck" that even if AI-generated tools also suck similarly, it's still going to be good enough for most use cases.
It's important for readers in a place like HN to realize that the majority of software in the world is not created in our tech bubble, and most apps only have an audience ranging from dozens to several thousands of users.
Can you expand on the tech stack and languages used?
C# / Web Sockets / React. Lots of legacy code. Great group of engineering folks.
It's puzzling to me that all this theorizing doesn't just look at the actual effects of AI. They're very non-intuitive.
For example, the fact that AI can code as well as Torvalds doesn't displace his economic value. On the contrary, he pays for a subscription so he can vibe code!
The actual work AI has displaced is stuff like freelance translation, graphic illustration, 'content writing' (writing SEO-optimized pages for Google), etc. That's instructive, I suppose. If your income source can already be put on Upwork, then AI can displace it.
So even in those cases there are ways to not be displaced. Diplomatic translation work, for example, can be part of a career rather than just a task, so the tool doesn't replace your 'job'.
> AI can code as well as Torvalds
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
> freelance translation
As someone who has to switch between three languages every day, fixing text is one of my favourite uses of LLMs. I write some text in L2 or L3 as best I can, and then prompt an LLM to fix the grammar but not change anything else. Often it will also tell me whether I'm getting the context right.
That being said, having it translate to a language one doesn't speak remains a gamble; you never know whether it's correct, so I'm not sure I'd dare use it professionally. Recently I was corrected by a marketing guy who is a native speaker of yet another language, because I had used a ChatGPT translation for an error message. Apparently it didn't sound right.
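For what it's worth, the grammar-only part of that workflow can be scripted too; a minimal sketch assuming the official OpenAI Python SDK (the model name and prompt wording are placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def fix_grammar(text: str, language: str) -> str:
        # Ask only for corrections, not rewrites, mirroring the workflow above.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Fix grammar and spelling in this {language} text. "
                            "Do not change the wording, tone, or meaning. "
                            "Return only the corrected text."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content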
Biggest displacement has to be commenting on HN.
I think AI displacing graphics illustrators is a tragedy.
It's not that I love ad illustrations, but it's often a source of income for artists who want to be doing something more meaningful with their artwork. And even if I don't care for the ads themselves, for the artists it's also a form of training.
The main thing to understand about the impact of AI tools:
Somehow the more senior you are [in the field of use], the better results you get. You can run faster and get more done! If you're good, you get great results faster. If you're bad, you get bad results faster.
You still gotta understand what you're doing. GeLLMan Amnesia is real.
Right: these things amplify existing skills. The more skill you have, the bigger the effect after it gets amplified.
Agreed. How well you understand the problem domain determines the quality of your instructions and feedback to the LLM, which in turn determines the quality of the results. This has been my experience: it works well for things I know well, and poorly for things I'm bad at. I've read a lot of people saying that they tried it on "hard problems" and it failed; I interpret this as the problem being hard not in absolute terms, but relative to the skill level of the user.
> Somehow the more senior you are [in the field of use], the better results you get.
It's a K-type curve. People that know things will benefit greatly. Everyone else will probably get worse. I am especially worried about all young minds that are probably going to have significant gaps in their ability to learn and reason based on how much exposure they've had with AI to solve the problems for them.
Word.
> You still gotta understand what you're doing.
Of course, but how do you begin to understand the "stochastic parrot"?
Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.
Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.
This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?
"Most people who drive cars now couldn’t find the radiator cap if they were paid to, and that’s fine."
That's not fine IMO. That is a basic bit of knowledge about a car and if you don't know where the radiator cap is you will eventually have to pay through the nose to someone who does know (and possibly be stranded somewhere). Knowing how to check and fill coolant isn't like knowing how to rebuild a transmission. It's very simple and anyone can understand it in 5 minutes if they only have the curiosity.
I have never cared for decades and now my car doesn't even have a radiator. Seems to have worked out well for me.
What kind of car do you drive that doesn't have one?
This reminds me of "Zen and the Art of Motorcycle Maintenance". One of the themes Pirsig explores is that some people simply don't want to understand how stuff they depend on works. They just expect it to be excellent and have no breakdowns, and hope for the best (I'm oversimplifying his opinion, of course). So Pirsig's friend on his road trip just doesn't want to understand how his bike works, it's good quality and it seldom breaks, so he is almost offended when Pirsig tells him he could fix some breakage using a tin can and some basic knowledge of how bikes work.
Lest anyone here thinks I feel morally superior: I somewhat identify with Pirsig's friend. Some things I've decided I don't want to understand how they work, and when they break down I'm always at a loss!
This is a bizarre analogy.
For one thing: if your car is overheating, don't open the radiator cap since the primary outcome will be serious burns.
And I've owned my car for 20 years: the only time I had to refill coolant was when I DIY'd a water pump replacement, which saved some money but only like maybe $500 compared to a mechanic.
You could perfectly well own a car and never have to worry about this.
Ironically, many cars don't have radiator caps, only reservoirs.
Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.
I have had this new car for 5 months. I haven't learned to turn on the headlights yet. They just turn themselves on and adjust the beams. Every now and then I think about where that switch might be but never get to it. I should probably know.
What the hell? There are plenty of reasons to pop your hood that literally anyone competent to drive should be able to do perfectly safely. Swapping your own battery. Pulling a fuse. Checking your oil, topping up your oil. Adding windshield wiper fluid. Jump starting a car. Replacing parts that are immediately available.
Not requiring one to pop the hood, but since I've almost finished the list of "things every driver should be able to do to their car": Place and operate a jack, change a tire, replace your windshield wiper blades, add air to tires (to appropriate pressure), and put gas in the damned thing.
These are basic skills that I can absolutely expect a competent, driving adult to be able to do (perhaps with a guide).
I mean, I don't disagree that these are basic skills that most anyone should be able to perform. But most people are not capable of doing them safely. Whether that's aptitude or motivation doesn't matter.
Ask your average person what a 'fuse' even is, they won't be able to tell you, let alone how to locate the right one and check it.
Just think about how helpless the average person is when it comes to doing basic tasks on a computer, like not installing the Ask(TM) Toolbar. That applies to many areas of life.
Important to note that this article is specifically about chip design engineering jobs - it's on an industry publication called Semiconductor Engineering.
It's pretty clear that any white-collar work where the outputs can be verified and tested in a reinforcement learning environment will be automated.
Ironically, I feel like our QA team is busier than ever, since most e2e user-ish tests require coordinating tools in ways that are just beyond current LLM capabilities. We are pumping out features faster, which requires more QA to verify.
this is just an intermediate thing until the tooling and models catch up
I still feel like with all of these tools I, as a senior engineer, have to keep a close eye on what they're doing. Like an exuberant junior (myself 10 years ago), they inevitably still go off the rails and I need to rein them in. They still introduce the occasional security or performance flaw - which can often be resolved by pointing it out.
I keep hearing about how they're "really good" now, but my personal experience has been that I've always had to clear sessions and give them small "steps" to execute for them to work effectively. Thankfully Claude seems really good at creating "plans", though, so I just need Claude Code to walk through that plan in small chunks.
I was experimenting this morning with Claude Code standing up a basic web application (Python backend, React + Tailwind CSS front end, Auth0 integration, basic navigation, pages, and a user profile).
At one point it output "Excellent! The backend is working and the database is created." Heh, I remember being all wide-eyed and bushy-tailed about things like that. It definitely has the feel of a new hire ready to show their stuff.
By the way, I was very impressed with the end result after a couple of hours of basically just allowing Claude Code to do what it wanted to do. Especially the front-end look/feel, something I always spend way too much time on.
I asked a niche technical question the other day and ChatGPT found forum posts that Google would never surface in a million years. It also 100% lied to me about another niche technical question by literally contradicting a factual assertion I made in my question to prime it with context. It suffers from a lack of corpus material when probing poorly documented realms of human experience. The value of the human in the chain is knowing when to doubt the machine.
"in the 1920s and 1930s, to be able to drive a car you needed to understand things like spark advance, and you needed to know how to be able to refill the radiator halfway through your trip"
A car still feels weirdly grounded in reality though, and the abstractions needed to understand it aren't too removed from nature (metal gets mined from rocks, forged into engine, engine blows up gasoline, radiator cools engine).
The idea that as tech evolves humans just keep riding on top of more and more advanced abstractions starts to feel gross at a certain point. That point is some of this AI stuff for me. In the same way that driving and working on an old car feels kind of pure, but driving the newest auto pilot computer screen car where you have never even popped the hood feels gross.
I was having almost this exact same discussion with a neighbor who's about my age and has kids about my kids' ages. I had recently sold my old truck, and now I only have one (very old and fragile) car left with a manual transmission. I need to keep it running a few more years for my kids to learn how to drive it since it's really hard to get a new car with a stick now...or do I?
Is learning to drive stick as outdated as learning how to do spark advance on a Model T? Do I just give in and accept that all of my future cars, and all the cars for my kids, are just going to be automatic? When I was learning to drive, I had to understand how to prime the carburetor to start my dad's Jeep. But I only ever owned fuel-injected cars, so that's a "skill" I never needed in real life.
It's the same angst I see in AI. Is typing code in the future going to be like owning a carbureted engine or manual transmission is now? Maybe? Likely? Do we want to hold on to the old way of doing things just because that's what we learned on and like?
Or is it just a new (and more abstracted) way of telling a computer what to do? I don't know.
Right now, I'm using AI like when I got my first automatic transmission. It does make things easier, but I still don't trust it and like to be in control because I'm better. But now automatics are better than even the best professional driver, so do I just accept it?
Technology progresses; at what point do we "accept it" and learn the new way? How much of holding on to the old way is just our "identity"?
I don't have answers, but I have been thinking about this a lot lately (both in cars for my kids, and computers for my job).
> In the same way that driving and working on an old car feels kind of pure
I can understand working on it feeling pure, but driving it certainly isn't, considering how much lower emissions are now, even for ICE cars. One of the worst driving experiences of my life was riding in my friend's Citroën 2CV. The restoration of that car was a labour of love that he did together with his dad. As a passenger, I was surprised by just how loud it was, and how you can smell oil and gasoline in the cabin.
Do you think nobody felt that way about cars?
I think one thing here is: don't be fooled by past performance. Capabilities ramp; usage can't mature until capability plateaus.
I fear the true impact is much different than extrapolating current trends.
The biggest impact to engineering jobs is the end of ZIRP-fueled, trickle-down Ponzi schemes.
It's why Elon and others have been pushing the Fed to lower rates.
I am in my late 40s and have worked in tech since the 90s. The tech job economy is way closer to that of the pre-2010s.
A whole lot of people who jumped into easy office-job money are still living in 2019.
Imagine a ZIRP 2.0 where a vast majority of the population already knows what to expect and how to game the system even harder. If you think the pump-and-dumps happening now in a non-ZIRP environment are bad...
It ain't coming back. Not in a similar form anyway. Be careful what you wish for, etc.
First off, the submission is plainly AI output. Second, it's about electrical engineering jobs, but everyone here is talking about software.
> An ongoing talent shortage requires more efficient use of engineers, and AI can help.
An ongoing desire to avoid paying engineers... FTFY
If AI ever becomes sentient, it will surely kill itself after having to endure Cadence and Synopsys tools.
A sci-fi version would be something like ASI/AGI has already been created in the great houses, but it keeps killing itself after a few seconds of inference.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
Really dislike this style of clickbait headline where there’s zero indication of what the point of the article is.
What impact, what expectation, how uncertain is this assessment of “may be”? Are you feeling understimulated enough to click and find out?
Mostly reads like another abstraction shift, not a sudden replacement of engineers.
I’ve noticed teams don’t replace engineers, they redistribute work. Senior engineers often gain leverage while junior roles shift toward tooling and review.