> I don’t want to depend on something doing the work I earn money with.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
There are a lot of good reasons we should be skeptical of AI and not give up on essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or stop us from using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.
You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago! If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
> “I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I chuckled when I read this. Being 55, I tend to think this is true. But looking back at the things I accepted as normal while growing up, I now notice that some of them have had a detrimental effect on society.
So, although age tends to have this effect on how we see the world, and some of that is probably nothing to worry about, I think part of this awareness carries some wisdom and is trying to protect our species.
That's probably true to some extent, but I'm not completely on board.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Television and calculators were in the world when I was born, but I never viewed them as "natural". TV always seemed to be a way to distract yourself from the world.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
I was happy to get on board with the WWW, the web browser, and widespread email usage. Those were revolutionary technologies with immense value. On the other hand, I'm still not on board with text messaging, phone scrolling, or social media. If I could, I'd eliminate social media from society.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I'm over 50 and a strong believer in the value of the LLM. It's a work tool that I can use at work and put away when I'm at home (or not, depending on my mood). It's new and exciting and revolutionary and a move in the right direction for humanity.
You need not stick to any level. Some things that have always been are still bad (slavery is an obvious example, now dated enough to be uncontroversial). Some new things are bad and others good, at any age.
Don't grow up too set in your ways to learn the new. But do grow up fast enough, young enough, to develop some cynicism about everything. Now that I'm in my 50s the first is important, but when I was younger the latter was.
The problem is that you're likening fundamentally unlike things. AI isn't like a microwave or an automatic car or a power tool. It does not augment you. As I said elsewhere: AI is not a bicycle for the mind, it's an easy chair. You will lose more than you ever gain.
This is purely a matter of perception. Cooking a meal is a deeply intellectual process. If I buy a meal from a restaurant, yes I am losing a skill. But if making a hollandaise is not a skill I ever need in my life, it's not really a practical loss.
AI is taking problems and putting them in a drawer so we never have to think about them again. Matches de-intellectualized making a fire. A washing machine de-intellectualized doing laundry. These are now solved problems.
Our brainpower spent on them is effectively worth nothing. The only reason we need to learn to make a fire from scratch is for the intellectual satisfaction or for emergency situations. The same reason we would choose to work on the problems that AI can now solve.
It's only a loss if you think the skill and ability you are losing is intrinsically valuable, and the only thing you are going to replace it with is leisure.
>It's only a loss if you think the skill and ability you are losing is intrinsically valuable
What about the skill of learning itself? I would suggest that's one of the most important skills humans have evolved. The more integrated AI becomes in our societies, the more it will automate away potential opportunities for learning. I can foresee a world tightly integrated with AI where people are not only physically sedentary, but mentally as well.
As we progress further into the future, we need more educated people than ever to tackle the exponentially increasing complexities of our society. But AI presents an obstacle that many will never cross due to how convenient it is to skip the messy work of understanding.
Also, this problem is not unique to AI. It existed before the GPTs and Claudes of the world. But it's a problem of scale, and every company on Earth right now is trying to scale AI up as fast as possible.
Here's a practical example: I am using AI to help me with my garden. It's been amazing - it helps me identify plants, identify soil issues, what fertilizer to use and what days to apply it, etc.
What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
It's also clearly obvious when AI gives bad or incorrect advice - I am still trying different things and watching for the results.
Coding is an outlier example where AI can just do the work semi-competently without anyone checking it. But I think it speaks more to the nature of coding itself - for most people, coding is a means to an end, not a pursuit in itself.
>What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
An opportunity for a deeper understanding of gardening? If you spend hours researching on gardening and come away with an incomplete understanding of what you were attempting to do, I'm not sure that's immediately the fault of the research available. It could be that you just didn't do a good job searching for the necessary information.
In this way, AI can be a boon. It helps you figure out what you actually want to know in the moment. But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
>It's also clearly obvious when AI gives bad or incorrect advice
Is it? Isn't this a __core__ problem that researchers around the world are trying to solve? Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment? I think it's hard to know if something is bad advice by looking at just cause and effect. It could be that you just lack the understanding to put the advice into practice.
> It could be that you just didn't do a good job searching for the necessary information.
How can you? The existing resources are terrible.
> But I think it would be a step to far to say that a smattering of specific questions can replace the sturdy foundation povided by a typical education--e.g. through apprenticeship, books, etc.
I am not going to go through a college program for my own garden. And I have books! But unless you do a lot of reading and perform a small research project of your own, you are not going to know how all of the plants in your specific garden, in your specific region, in your specific weather, are going to behave.
The best I could do is hire an expert - but again I am learning less by hiring it out.
> Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment?
"Use X to kill the moss". It didn't kill the moss. I will now use AI to find a list of alternative things to try to kill the moss, and learn what works in my garden.
The idea that AI is going to make people stop learning is not, I think, borne out in practice. It might make some people stop researching as an activity, though.
> making a hollandaise is not a skill I ever need in my life
I know you just wanted to poke at the analogy, but if you like hollandaise, it's one of the easiest and most rewarding sauces to make at home! Restaurant hollandaise is usually terrible.
(Though it's not as easy as a béchamel, and yet I still see people buy jarred alfredo sauces. You can literally make an amazing alfredo sauce with pantry ingredients in less time than it takes to boil the noodles! Why would anyone buy an alfredo sauce!?)
This is more or less my point, though. If people are willing to give up these incredibly high-reward, low-effort skills, how much more uphill is the battle to get people to code and process data?
Now you're getting it! The modern way of life which prioritizes convenience and production destroys human connection. Making sauce is pointless; let's go one step further and make every other thing you might do equally pointless. Welcome to the hellscape! It's surprisingly comfortable.
The other extreme is also a hellscape. Work and suffering are the only things of value. Let's make pyramids to bring people together and show off our collective wealth.
Again, writing replacing memorization is not a good 1:1 comparison to AI replacing technical understanding. Someone still needs to understand what is written and act upon that knowledge. That requires skill and experience in the domain they're working within.
However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
Also, we need to be honest with ourselves. Human brains did not evolve for the instant gratification of modern technology. We've already seen what technology has done to our attention spans. I am concerned over what further reliance on technology, particularly AI, will do to our brains.
> However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
This perspective is funny to me because of how much the modern web is already built around web developers refusing to use CSS and PHP. The giving up of the skills happened before the automation.
Dubious. AI psychosis is the opposite: it's about being empowered to explore ideas much further, but with a maladaptive tool that reinforcement learning has shaped into an appeaser.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
Technology is like much of material reality, in that we can think whatever the hell we like about its various forms, especially so if we’re surrounded by it.
It’s not insane. They are correct that that is the point of civilization: carrying information from generation to generation, outside the oral tradition, in a systematic, organized, reliable way.
The point of civilisation, however loose that idea may be, is, if it’s anything at all, determined by people.
Technology exists today in a way that feels like it could be defining its own path in a sense, but much like oral tradition, neither are large enough concepts to describe civilisation.
> I want to shake these people by the shoulders... Do you use a microwave?
Microwaves aren't doing active problem solving though. It seems what the author is trying to say is that they enjoy problem solving and find coding a rewarding and creative experience. Sure, a microwave saves time, and at-home cooks might enjoy zapping a frozen dinner, but the author is a chef who enjoys writing their own recipes and cooking from scratch. AI isn't just the microwave, it's also the chef.
> None of us have "lost" the ability to go backwards if we really wanted
This absolutely isn't true. Using Google Maps quickly makes people poorer at navigation - skills need to be practiced. The author thinks letting AI into their kitchen to cook for them will change them cognitively, making them lazy and costing them their skills. And that would be true.
What it sounds like you're getting at, but never said, is that there might be newer skills on the other side that are even more rewarding, which may be true. But if history is any indication, there will be no shortage of folks who like things the old way and want to use their meat brains to provide bespoke goods and services that AI can't.
Agreed, this is the aspect of the AI criticism I find strange too. We should want to be targeted in how we use it, just as how a practical fusion reactor wouldn't replace solar in every situation. Not reject it outright.
We should be using these capabilities to allow ourselves to work on harder problems. In science, there are a lot of tasks that require a low, but non-zero amount of intelligence and aren't really the most interesting part of science. Many of these tasks limit how much work can actually be done. Automate them, and you can dramatically increase your capabilities and focus on the actual science work.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
None of these things allow you to turn your brain off while the machine does the work.
I still have to DRIVE the car and all the thinking that goes with that. It's not a robotaxi.
I still have to acquire and prep the food I am microwaving. It's not a replicator.
I still have to know what I want to eat before grocery shopping and prepare the food. It's not a take out restaurant.
I still have to know how to use the power tools to carefully shape something into a fine piece of furniture and not a pile of splintered firewood. Power tools can't operate on their own, unless aliens (see Maximum Overdrive).
These are better analogies:
Do you take a taxi or public transport? Those let you turn your brain off while someone or something does the driving work.
Do you go to a restaurant where you can pick what you want, turn your brain off and wait for a delicious (or not) meal?
Do you order takeout where you can order what you want from the comfort of your home, turn your brain off and enjoy the meal when it arrives? Then reheat the leftovers in the microwave.
Do you use a fabrication service where you send them a drawing, turn your brain off, and they ship you an assembled thing?
All of your examples involve you sitting and waiting. That doesn't seem like an apt analogy for what AI can do. You don't have to sit there and come up with other things to do while the AI does the work.
When AI works (and technology in general) that's kind of what it's like. You'll never perceive that you are not doing the work anymore because you won't perceive the work.
I love driving a manual transmission. But I also understood why it was so hard for me to find a new Jeep Wrangler with a manual transmission a few years ago.
The automatic transmission gives us more dexterity for... what exactly? Fiddling with the dash, reaching for something in the back seat, texting? The best case human has much more control but the average case seems worse off.
I think of myself as very practical - I drive a manual, I fix my own cars, I do my own house projects, I cook my own meals.
Which is part of the reason these anti-AI screeds fall on deaf ears for me. My generation has willingly abandoned all of these legitimately useful hard skills. But there's also nothing preventing you from picking and choosing what you care about.
I'm not actually against manual coding. I just think people need to be honest about why it's valuable.
I don't work on my own car because I believe that everyone should fix their own cars. But I think enough people should be knowledgeable and have these skills in society - if for no other reason than to keep mechanics and automakers and dealerships honest. I am not personally upset if you work on your own used car or take it to your dealership.
I am against the idea that everyone should somehow be against AI coding.
I don't want any machine doing my thinking for me. This is why I am in favor of banning traffic lights. Why should I trust a machine to tell me when it's safe to stop and when it's safe to go? Plus, we could employ police officers to stand in the middle of the intersection and direct traffic, thus contributing further to employment.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools
Which of these is behind a subscription paywall and owned by another party that would cut off your access immediately?
These comparisons make little sense, which is the problem with comparisons. They are soundbites from enthusiasts who don't know or understand how this technology will actually affect or shape us, but feel entitled enough to misinform the rest of us.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
I'm not addicted in any way to an automatic car. I prefer an automatic car, because it's easier to drive than a manual car. There have been numerous studies already into the problematic nature of AI addiction, and calling it simply "progress" is denuding the experiences of tons of people who have been harmed, up to and including dying, as a result of too much AI use.
> But the invention of the plow did not, in fact, make us lazier or stop using our brain.
No but industrial farming practices are not an unalloyed good either.
> But none of us have "lost" the ability to go backwards if we really wanted.
I mean, we kind of have in a few ways, at least insofar as the AI boom is concerned. I can't have a version of Windows that doesn't have Copilot in it. I can't have Microsoft Office without Copilot. I can't have Photoshop without generative AI features. Like, say what you will about the AI doomsayers, and yes, even this one I think is overstating it a bit? But the AI push is relentless. It's everywhere, in every product, all the time. Last time I was at Home Depot I saw an AI-powered microwave, for fuck's sake.
And, that's not to say there are no problems at which LLMs are good solutions, but it isn't this many. I use Claude to generate code, usually boiler-plate type stuff or to help me solve problems, and it's legitimately quite good. Conversely, generated images and video have always, always looked like absolute shit to me. Generated music is... okay? But as a consumer I barely have a way to choose a non-AI future if that's what I want.
> You can finally ask a computer to think and solve problems, and it will!
Sometimes. Other times it tries for a while and gives up. Other times it makes some shit up that would solve your problem, and Omnissiah be with you if you follow those instructions. Other times you argue with it for 10 goddamn minutes because it doesn't comprehend your instructions.
> If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
That is flatly ridiculous. LLMs do a lot of interesting things, that I will grant, but they are not the problem solver you're pitching them as, and certainly nothing like a fusion reactor.
Reposting a comment (and one of the replies) since it's relevant:
---
It occurred to me on my walk today that a program is not the only output of programming.
The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
For what it’s worth (ie, absolutely nothing), I agree with her 100%. I didn’t get into this field in order to prompt an AI to take care of the details. I got into it because I love the details.
I’m a strong performer on a good team at a company many people would want to work at… and I know the clock is ticking. Sooner or later, I will be too slow.
I’m not going to claim that this is the wrong way to go. It’s obviously the future, and the future doesn’t care what allenrb does or does not want. I’m somewhat hopeful that power and cooling requirements will come down by multiple factors of 10x over time, reducing the environmental damage.
The fact is, I love what I’ve been able to do “the old way” and just don’t feel the urge to move on. So it goes.
Someone the other day was talking about there being two kinds of builders. One likes the details of doing, where the other likes the things they produce.
The idea was that one likes AI and the other naturally hates it.
I thought about that for a bit and decided that, like most things, if you’re any good at something the “hard way” you probably have some of both. Or at least I’m sure it’s true for me.
I LOVE that I can produce the things I want to create without spending months crafting lines of text. The “I know how to architect this, I know what a decent data model looks like, I have a good idea of where someone is likely to introduce security or scaling problems. I can pilot this plane and produce something GOOD.”
But, I really also HATE looking at the final product and forever measuring, in my head, how much of it is even mine. Which parts I haven’t thoroughly reviewed, or would have spent a week learning and didn’t, or maybe wouldn’t have accomplished correctly at all? Am I a fraud, now? I wasn’t before…
Yes, I am much more productive having Claude Code bang out boilerplate back-end code, but honestly I always kind of enjoyed doing it. Now I'm just a micro-manager for an AI.
And honestly, how long will that last? Given that LLMs came out of nowhere to radically redefine my role from software engineer to prompt writer in just a couple years, I have every reason to believe that they're coming for my role as prompt engineer next. (As my CEO surely hopes.)
I'm just glad the timing of the great AI replacement began right when I was nearing burnout anyway.
While I applaud her and wish her well — writing like this reminds me of a couple of things.
First my aging father insisting on navigating using his unfortunately fading memory instead of Google maps. Some people just won’t pick up technology out of habit or spite, even if it hinders them.
Second, a quote I read here that I’ll paraphrase “you can be the best marathon runner in the world and still lose a race to a guy on a bike.” Know the race you’re racing. It often changes.
I think it’s valid and commendable to keep the old ways alive, but also potentially dangerous to not realize they’re old ways.
I don't think this diminishes your point, but, for a thing like memory, your father may be maintaining it by insisting on relying on it. It may diminish regardless, but its diminishment may slow down.
At work, we are in a certain kind of race. In life, we are in a certain other kind. To paraphrase a recent Brandon Sanderson talk about creativity in an era where AI can outpace and possibly soon, out-quality a professional, "The work you do on _you_ can be _the art_."
Strongly agree! As someone who has been caring for a parent with dementia, it's definitely a use-it-or-lose-it kind of situation. See also the studies on long-term cognitive health in London cab drivers.
I had a significant other 20 years ago who would not use a GPS. This resulted in constant fights whenever she travelled. If she got off her route, I got a phone call. I lacked the skills to divine her exact location and what direction she needed to go based on vague descriptions of being on “some highway” for “some amount of time” near mile marker “I don’t know.” After hanging up on me she would eventually stop somewhere, or ask someone, or figure something out, or maybe never come home.
Then one day, on the way to an OB appointment, she almost plowed into the car in front of her while looking at her MapQuest pages, risking our unborn child.
Even after I pointed out the danger, she claimed the guy in front… He did no such thing; I saw everything from my position in the parking lot.
I bought a GPS unit “for me” and put it into my car. I just used it. If we travelled in my car she still insisted on her printed maps. I ignored them. (This was very intense.)
Then one day we took her car for a trip and I brought my GPS. And “forgot it” in her car. I claimed I would remove it “later”.
About two weeks later she gave me the look and said, dead serious, not to laugh. She then said “the GPS is ok” and it could stay in her car.
Hallelujah! The life expectancy of my wife and child just went up exponentially.
To this day, I have no idea what her hangup was. The best I could come up with was that she was bad with directions, was probably taught how to read a map, and her father probably instilled a sense of pride in her ability to read a map. Choosing to use a GPS felt like retroactively wasting the time she spent learning to use maps, and devaluing a skill she worked hard to learn.
This is the KEY difference between people who are willing to adopt this technology and those who aren't.
If you are able to view your job as simply a pursuit of a craft, more power to you.
The reality is likely that over time your employer will realize you are slower than every other engineer, and that your enjoyment of the craft is actually just you being an old slow developer.
The "race" here is the race with every other developer out there. They're getting on bikes, and starting to pull away ... what are YOU going to do?
Yeah, I think this article put a finger on what I was feeling after using Claude Code for the first time to convert a PDF to a Markdown document[0].
I think I will update my article with these thoughts. Thanks for touching on something I had been feeling. It also felt like I was cheating. I also used CC to update the version of my SSG, and that was good because I did not want to spend my time dealing with that. But there are certain projects that I can see myself not feeling good about if I used the tool to help with them.
I don't have a stake in AI, but more and more I see the following patterns:
- people that give in to AI do so because the technical merits suddenly became too big to ignore (even for seasoned developers that were previously against it)
- people who avoid AI center their arguments on principles and personal discomfort
Just from that, you can kind of see where this is going.
Most arguments against it are built on some moral principle, not on the objective reality of its usefulness.
Crypto used to be the thing to hate, but that made sense: the objective usefulness of crypto was meager. AI models were always crazy useful but prohibitively expensive. You'd need an entire team to build your own models. Now you don't.
That's true for new projects. As a bootstrap mechanism I've found it is not ideal: it takes away your hard-earned preferences and know-how and leaves you barely engaged on the technical path. On the other hand, I already have quite big codebases, some legacy and inherited, with a vast amount of context to work on and figure out fixes, bugs, and improvements, and Claude Code excels 95% of the time at figuring out the existing flows and functionality. If a vague or precise issue is known to affect some files or user workflows, it can do a good job of pointing out, with a high chance of being right, what the issues are.
I've found two personal use cases for LLM generated code:
(1) I have an idea for some app, but either I feel it won't be useful enough/save me enough time to justify developing it, or I simply don't feel the problem is interesting enough to be motivated by it. In that case, a vibe coded tool is perfect. It generally does one simple thing, and I don't care about long term maintenance, because it just needs to keep doing that thing.
(2) Adding a feature to an open source project. Again, it's a case of "I want this feature, but am not willing to spend the time needed to implement it." Even a relatively simple open source project can take a day or two just to get a basic understanding of the code and where I need to make the changes. Now I can often just get a functioning vibe-coded implementation within a few hours.
(2) leaves me with some unsettling feelings about how this will affect the future of open source software. Some of the features I've implemented this way may very well be useful to other users, but I can't in good conscience just dump a vibe-coded pull request on a project and expect them to do the work of vetting it. And if I didn't have the energy to implement the change myself, I'm definitely not going to bother going through all of the LLM-generated code and cleaning it up to the standards of the project. Whereas before I didn't have a choice, and the idea of getting the change ready for a PR was much less daunting, since I understood the problem space and the solution well.
So at least for myself, I can see a future where many of the apps I use are bespoke forks of popular applications. Extrapolate that to many, many people and an interesting landscape emerges.
Some of us have been waiting our whole lives for a comprehensive DWIM command.
> DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user's request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. [0]
— Teitelman and his Xerox PARC colleague Larry Masinter, in 1981
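The DWIM idea still shows up in small ways, like command-line tools that silently correct obvious typos rather than stopping the user. A minimal sketch of that behavior in Python, using the standard library's `difflib` (the command table and similarity threshold here are invented for illustration, not from any real tool):

```python
import difflib

# Hypothetical command table; in a real CLI this would be the tool's verbs.
COMMANDS = ["status", "commit", "checkout", "branch", "merge"]

def dwim(user_input: str) -> str:
    """Return the user's command, silently correcting obvious typos.

    This mirrors the DWIM principle: when the intended correction is
    obvious from context, proceed instead of asking the user to retype.
    """
    if user_input in COMMANDS:
        return user_input
    # cutoff is a similarity ratio in [0, 1]; 0.6 is difflib's default.
    matches = difflib.get_close_matches(user_input, COMMANDS, n=1, cutoff=0.6)
    if matches:
        return matches[0]
    raise ValueError(f"no plausible interpretation of {user_input!r}")

print(dwim("stauts"))  # a likely typo of "status"
```

Git's `help.autocorrect` setting does something very similar: a mistyped subcommand close enough to a real one is re-run as the intended command.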
Why can’t people just acknowledge AI is good at some things and bad at others? Why does every post say AI is either groundbreaking or terrible? Get a grip, people. It’s a tool.
Nuanced opinions don't get clicks and don't fit in tweets. Algorithms hate context.
We've created a communications system bottlenecked by virality and short form text and video in which all nuance and context is stripped from everything.
This, far far more than anything AI is doing, is what's making us dumber.
I am the opposite. I do not understand how people do not get at least 5x more productive with GenAI.
Maybe it is because I do not do much front-end design. Maybe it is because I'm a bit more diligent than your average "viber", or maybe because it is easier for me to spot a suboptimal solution, or to challenge it with edge cases from experience, etc.
But these people turning their backs, not on principle, because that I fully understand, but because of underperformance?
Maybe their expectations are way out there? Maybe (most likely) it is the application domain? Maybe plainly a skill issue?
But seeing how GenAI is plowing through fields, I would not turn my back on it even if it wasn't there (yet) in my domain.
Here’s my experience: just yesterday I had to tackle a task that would have required a backend engineer and a frontend engineer several days, so I tasked several Claude Code agents to work on it autonomously. With the time freed up, I didn’t just twiddle my thumbs. I used it to read up on a topic that was making the rounds yesterday and gained a better understanding of it - something hard to do when you juggle both a job and raising a family. I could then reinvest the time I used to learn something by applying it to some other projects.
Just my two cents. No matter whether you use AI or not, I’m sure you’ll gain something.
I know I read several posts like these every day, but I've been thinking more and more that I should stick to the chat window and having AI guide me instead of doing the work for me. Great tool for showing me what to do when I don't know and offering me guidance, and rubber ducking of course, but I definitely lose the context and understanding when I just let it rip and write everything for me.
> I've been thinking more and more that I should stick to the chat window and having AI guide me instead of doing the work for me.
That's how I have been using AI for years. I feel like my productivity has skyrocketed over the past year or two, and all my code is still written by hand. It's like having StackOverflow on demand. I also never really have to worry about tokens or usage limits. I don't think I have ever hit the limit on the $20 Claude plan, and I use Claude every day.
Who else struggles with both sides of this? My engineer side values curiosity, brain power, and artisanship. My capitalist side says it's always the product, not the process. My formula is something like this: product = money, process = happiness, money != happiness, no money = unhappiness.
I think the optimal solution is min/maxing this thing. Find the AI process that minimizes unhappiness, and maximizes money.
> My capitalist side says it's always the product not the process.
Your capitalist side needs to read some Deming. "Your system is perfectly tuned to produce the results that you are getting." Obviously, then, if you want better results, you need to improve your system.
Also "the product" is ambiguous. Is it the overall product, like how the product sits in the market, how the user interacts with it to achieve their goals, the manufacturability of the product, etc.? That is Steve Jobs sort of focus on the product, and it is really more of a system (how does the product relate to its user, environment, etc). However, AI doesn't produce that product, nor does any individual engineer. If "the product" means "the result of a task", you don't want to optimize that. That's how you get Microsoft and enterprise products. Nothing works well together, and using it is like cutting a steak with a spoon, but it has a truckload of features.
I definitely struggle with both sides, or maybe multiple sides. On the one hand most of my daily output at my job is coming from AI these days. On the other hand I find the explosion of AI-generated "writing" (and other forms of art) to be aesthetically abhorrent. And I've just recently started a ... weird sort of metaphysics / spirituality / but also AI related writing project, so the difference between creation with and without AI is in really sharp focus for me right now.
I wrote an article about this, but honestly I don't think I really captured the totality of my feelings. I really haven't decided where I land. I'm definitely using the tools for economic purposes, and I even have some "pure-fun" side project stuff where I'm getting value from it.
How is there such a wide gap between developers?
Either running ten agents pushing thousands of LOC a day or afraid to paste a code snippet from ChatGPT.
"I'm not going to use this technology that obviously enhances my productivity because <insert emotional subjective reasoning that no customer would ever care about here>."
I think a lot of people have forgotten why we actually get paid to write code. The person who wants an automated billing system doesn't care if you hand-typed it or not, or if the CSS that would have taken 2 hours to write took 8 seconds via an AI plus 60 seconds of you tweaking a border you didn't like. They just want their billing system. And if you are the person that takes 20x longer to build it, you're going to quickly get outcompeted. Sorry.
Customer doesn’t give a fuck how long a billing system took to make, they only care that it works correctly.
A billing system only truly gets built once, then possibly maintained in perpetuity. This makes the advantage of building it 20x faster pointless. AI builds it in a day; will it matter 5 years from now if that billing system was instead built by hand in 20 days a long time ago? No.
The speed advantage of AI only comes into play when you have a lot of code to crank out continuously.
Do you have a need to constantly build bespoke billing systems at a rate of 1 per day? Probably not. So who cares. Take your little AI grift charging $1000/month somewhere else. It’s not needed.
Every billing system in use is constantly maintained with new features, bug fixes, and the like. The system of 20 years ago would apply the wrong tax laws today. The people asking for the new feature today care about how easy those are to add.
I think adding new features is exactly the sort of place where AI is terrible, at least after you do it for a while. I think it's going to have a tendency to regenerate the whole function(s), but it's not deterministic. Plus, as others have said, the code isn't clean. So you're going to get accretions of messy code, the actual implementation of which will change around each time it gets generated. Anything not clearly specified is apt to get changed, which will probably cause regressions. I had AI write some graphs in D3.js recently, and as I asked for different things, the colors would change, how (if) the font sizes were specified would change, all kinds of minor things. I didn't care, because I modified the output by hand, and it was short. But this is not the sort of behavior I want my code output to have.
I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the messes I've had to clean up, written by people who probably should not be developers, look fairly straightforward in comparison.
The customer cares how much it costs. And how much it costs is proportionate to how much time it takes to build. You’re conveniently ignoring market and price dynamics
Patient: "Doctor, it hurts when I do this."
Doctor: "Then don't do that!"
I'm finding that how you choose to use it makes all the difference in whether it's useful or not. I understand the reticence to jump on the hype train, and it's taken some reps to find the parts of building with AI that I don't like, how to navigate them, and how to keep it from making choices I wouldn't make or that are low quality.
> asking for a recommended tech stack
this is up to you. you can just tell it what tech stack to use. better yet, bootstrap the project yourself and give it to AI as the starting point. nobody is saying AI has to make these choices for you and you're not allowed anymore.
> I wasn’t happy with some of them because of my own experiences in the past... Even when deciding against something for a reason, Claude Code tried to push me back on the suggested track.
this kind of sounds like many human teammates at work... you don't always like their suggestions or they aren't convinced by your arguments? the difference being with AI you can just tell it what to do, no persuasion required.
nothing about AI prevents you from thinking about design choices, architecture, data modeling, or even the minutiae if you want to. the only thing telling AI to do those things for you is you!
Getting a feeling of "wanting to keep going" with something does not automatically make it an "addiction".
> I don’t want to depend on something doing the work I earn money with.
A tale as old as time, and a valid feeling, though not particularly helpful to dwell on since the technology will never go away and never get worse than it is right now.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
Your brain will think about other important things and you don't need to become "lazy" just because a machine is doing something that used to require more effort on your part.
> I enjoy technical discussions with (human) co-workers.
So what? You can still have those discussions.
> I enjoy reading blog posts and tutorials and learning from other developers.
So what? You can still read those blogs, but the subjects might shift away from coding minutiae to other topics.
> I want to learn and grow and become better at what I am doing by trial and error and mistakes I make all by myself.
Going forward, that trial and error process will start to happen more at the product/project level rather than the source code level.
> I don’t want to be part of a trend/hype destroying our planet even faster than we already do without it.
I'm a little tired of the environmental argument against AI. It feels contrived, like people are fishing for a "problemism" to use to oppose it to avoid harder discussions.
Let's compare AI to one typical 20 mile round trip commute. I asked Gemini and Claude and compared to see if the results looked good, but feel free to check.
One ~20 mile round trip commute: about 5700 Wh in an EV, about 27000 Wh in a gas car (due to thermal efficiency).
Comparing to the EV that's about 1,400 ChatGPT queries, 2,800 AI code completions, and 380 AI image generations.
Ordering lunch on Doordash uses the same power as days and days worth of very heavy AI usage, and that's if the dasher is driving a very efficient car. If they're driving an inefficient gas car it's like weeks of heavy AI usage.
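The commute arithmetic above can be checked with a quick back-of-envelope sketch. The per-task energy figures below (roughly 4 Wh per chat query, 2 Wh per code completion, 15 Wh per image) are assumed estimates implied by the comment's own ratios, not measured values:

```python
# Back-of-envelope check of the commute-vs-AI energy comparison.
# All per-task figures are rough assumed estimates, not measurements.
WH_PER_QUERY = 4.0        # one ChatGPT query (assumed)
WH_PER_COMPLETION = 2.0   # one AI code completion (assumed)
WH_PER_IMAGE = 15.0       # one AI image generation (assumed)

# ~285 Wh/mile is a typical EV figure -> 20 miles = 5700 Wh
ev_commute_wh = 20 * 285
# ~25 mpg gas car, ~33,700 Wh of energy per gallon -> ~27,000 Wh
gas_commute_wh = (20 / 25) * 33_700

print(round(ev_commute_wh / WH_PER_QUERY))       # queries per EV commute (~1425)
print(round(ev_commute_wh / WH_PER_COMPLETION))  # completions per EV commute (~2850)
print(round(ev_commute_wh / WH_PER_IMAGE))       # images per EV commute (~380)
```

The outputs land close to the comment's figures of 1,400 queries, 2,800 completions, and 380 images, so the comparison is at least internally consistent under those assumptions.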
Ultimately what matters is where we get our power. If we are getting it from CO2 emitting sources, what we do with it after that is not relevant. Make AI memes? Order burritos? Boil spaghetti? Who cares. The solution is to replace CO2 emitting sources with cleaner sources.
I also think people are avoiding the big fat elephant: wealth inequality. The whole problem with AI that bothers people is loss of jobs and possible wage suppression. The problem isn't AI, it's inequality and the fact that our system is basically regressive at this point with wealth being actively transferred upward.
But that's a hard complicated discussion and involves confronting powerful forces. It's easier to make stuff up about AI being some uniquely bad energy or water waste when it's not. This is really what "problemism" is all about: using a contrived or exaggerated or mis-attributed problem to avoid a hard or complicated conversation.
Human thinks eating food from fire is bad because it looks charred and you might burn your fingers doing so. /s
I remember the time when people insisted that they would never use a mobile phone. I remember the time when people didn't understand my presentation about the magical "internet" (8th grade school in '94).
I love posts like this. AI is easily the most disruptive thing to hit our industry in over a decade and it feels like one of those "this changes everything" moments. Reading how it's impacting others is cathartic and helps shape my own understanding.
Here are some thoughts I have from reading this article:
> The AI can’t “see” the output, so some responsive refinements were just not correct. Within one CSS rule block there were redundant declarations.
This 1,000%.
Vibe coding has its issues, and for me personally, frontend polish, responsiveness, and overall quality are the most glaring ones, the kind that simply re-prompting often can't solve.
Even with the ability to screen shot your UI that hasn't solved things like glitchy animations. If you want to do anything even remotely above a junior level like scroll animations, page transitions, etc. good luck. AI will certainly try to do it for you, but inevitably it will not work perfectly and you will need to manually refine or even re-write code. When the code base isn't yours, that makes these re-writes a lot less fun.
> The guilty conscience at the same time, like I was cheating. I realized that when I move on like this, my project will never truly feel like my own.
I've wrestled with this over the last year, and still do to some extent. I'm trying to shift my perspective and envision myself as a brand new developer maybe 16 or 17 years of age. Would I think this isn't my work? I doubt it. I'd probably just (correctly) assume that this is the state of the art, this is how you do it.
Unfortunately this doesn't fix a bigger problem... I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind, there's a strong back-and-forth effect where as you code, the concepts crystalize further... small wins fuel a wonderful dopamine hit experience... intellisense completions, compilation completions, page refreshes, etc. are now all replaced with dull moments often waiting for the agent to return its response, which you now read.
> I’m curious (and a little bit scared) to see where we will go from here. I hope that in the end I can be part of a community that values craftsmanship, individuality and honest, high-quality work.
I really hope so too... But speaking honestly, I think this ship is sailing away quite quickly.
Time is money, and it always has been this way. Very few organizations can afford the luxury of time when building, designing, etc. I see no chance for this genie to go back in the bottle, and I believe it has fundamentally changed (and will continue to change) the nature of our work.
Over time as these models improve, there's a chance it could dramatically reduce the overall need for developers... It will start with low level teams as we're seeing already, but could expand.
I have been saying this to everyone -- what's your exit strategy?
I'm not saying you need to panic, but you need a plan for what happens if/when salaries tank dramatically. I hate to be "that guy," but in life I've found expecting the worst isn't always a bad thing. Keep your mood up, prepare for the worst possible outcome, and be pleasantly surprised if that's not what happens.
I'd bet that a lot of those posters would have accurately predicted and called out many of the very real harms that cars have caused our society and shown that many many mistakes we've made could have been avoided.
> I don’t want to feel this kind of “addiction.”
> I don’t want to depend on something doing the work I earn money with.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
There are a lot of good reasons we should be skeptical of AI and not give up on essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or stop us using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.
You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago! If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
Douglas Adams really put it best:
> “I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I chuckled when I read this. Being 55, I tend to think this is true. But looking back at the things I accepted as normal while growing up, I now notice that some of them have had a detrimental effect on society.
So, although age tends to have this effect on how we see the world, and some of that is probably nothing to worry about, I think part of this awareness carries some wisdom and is trying to protect our species.
That's probably true to some extent, but I'm not completely on board.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Television and calculators were in the world when I was born, but I never viewed them as "natural". TV always seemed to be a way to distract yourself from the world.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
I was happy to get on board with the WWW, the web browser, and widespread email usage. Those were revolutionary technologies with immense value. On the other hand, I'm still not on board with text messaging, phone scrolling, or social media. If I could, I'd eliminate social media from society.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I'm over 50 and a strong believer in the value of the LLM. It's a work tool that I can use at work and put away when I'm at home (or not, depending on my mood). It's new and exciting and revolutionary and a move in the right direction for humanity.
you need not stick to any level. Some things that always have been are still bad (slavery is an obvious example now dated enough to be uncontroversial). Some new things are bad and others good at any age.
don't grow up too set in your ways to learn the new. But do grow up fast/young enough to get some cynicism about everything. Now that I'm in my 50s the first is important, but when younger the latter was important.
The problem is that you're likening fundamentally unlike things. AI isn't like a microwave or an automatic car or a power tool. It does not augment you. As I said elsewhere: AI is not a bicycle for the mind, it's an easy chair. You will lose more than you ever gain.
This is purely a matter of perception. Cooking a meal is a deeply intellectual process. If I buy a meal from a restaurant, yes I am losing a skill. But if making a hollandaise is not a skill I ever need in my life, it's not really a practical loss.
AI is taking problems and putting them in a drawer so we never have to think about it again. Matches de-intellectualized making a fire. A washing machine de-intellectualized doing laundry. These are now solved problems.
Our brainpower spent on them is effectively worth nothing. The only reason we need to learn to make a fire from scratch is for the intellectual satisfaction or for emergency situations. The same reason we would choose to work on the problems that AI can now solve.
It's only a loss if you think the skill and ability you are losing is intrinsically valuable, and the only thing you are going to replace it with is leisure.
>It's only a loss if you think the skill and ability you are losing is intrinsically valuable
What about the skill of learning itself? I would suggest that's one of the most important skills humans have evolved. The more integrated AI becomes in our societies, the more it will automate away potential opportunities for learning. I can foresee a world tightly integrated with AI where people are not only physically sedentary, but mentally as well.
As we progress further into the future, we need more educated people than ever to tackle the exponentially increasing complexities of our society. But AI presents an obstacle that many will never cross due to how convenient it is to skip the messy work of understanding.
Also, this problem is not unique to AI. It existed before the GPTs and Claudes of the world. But it's a problem of scale, and every company on Earth right now is trying to scale AI up as fast as possible.
Here's a practical example: I am using AI to help me with my garden. It's been amazing - it helps me identify plants, identify soil issues, what fertilizer to use and what days to apply it, etc.
What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
It's also clearly obvious when AI gives bad or incorrect advice - I am still trying different things and watching for the results.
Coding is an outlier example where AI can just do the work semi-competently without anyone checking it. But I think it speaks more to the nature of coding itself - coding is a means to an end and, for most people, not an actual pursuit in itself.
>What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
An opportunity for a deeper understanding of gardening? If you spend hours researching on gardening and come away with an incomplete understanding of what you were attempting to do, I'm not sure that's immediately the fault of the research available. It could be that you just didn't do a good job searching for the necessary information.
In this way, AI can be a boon. It helps figure out what you actually want to know in the moment. But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
>It's also clearly obvious when AI gives bad or incorrect advice
Is it? Isn't this a __core__ problem that researchers around the world are trying to solve? Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment? I think it's hard to know if something is bad advice by looking at just cause and effect. It could be that you just lack the understanding to put the advice into practice.
> It could be that you just didn't do a good job searching for the necessary information.
How can you? The existing resources are terrible.
> But I think it would be a step to far to say that a smattering of specific questions can replace the sturdy foundation povided by a typical education--e.g. through apprenticeship, books, etc.
I am not going to go through a college program for my own garden. And I have books! But unless you can do a lot of reading and perform a small research project, you are not going to know how all of the plants in your specific garden in your specific region in your specific weather are going to behave.
The best I could do is hire an expert - but again I am learning less by hiring it out.
> Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment?
"Use X to kill the moss". It didn't kill the moss. I will now use AI to find a list of alternative things to try to kill the moss, and learn what works in my garden.
The idea that AI is going to make people stop learning I don't think is borne out in practice. It might make some people stop researching as an activity, though.
> making a hollandaise is not a skill I ever need in my life
I know you just wanted to poke at the analogy, but if you like hollandaise, it's one of the easiest and most rewarding sauces to make at home! Restaurant hollandaise is usually terrible.
Agreed.
(Though it's not as easy as a béchamel, and yet I still see people buy jarred alfredo sauces. You can literally make an amazing alfredo sauce with pantry ingredients in less time than it takes to boil the noodles! Why would anyone buy an alfredo sauce!?)
Although this more or less is my point. If people are willing to give up these incredibly high-reward, low-effort skills - how much more uphill is the battle to make people code and process data?
Now you're getting it! The modern way of life which prioritizes convenience and production destroys human connection. Making sauce is pointless; let's go one step further and make every other thing you might do equally pointless. Welcome to the hellscape! It's surprisingly comfortable.
The other extreme is also a hellscape. Work and suffering is the only thing of value. Let's make pyramids to bring people together and show off our collective wealth.
If I'm not mistaken, this was Socrates' exact perspective on writing.
>Socrates' exact perspective on writing
Again, writing replacing memorization is not a good 1:1 comparison to AI replacing technical understanding. Someone still needs to understand what is written and act upon that knowledge. That requires skill and experience in the domain they're working within.
However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
Also, we need to be honest with ourselves. Human brains did not evolve for the instant gratification of modern technology. We've already seen what technology has done to our attention spans. I am concerned over what further reliance on technology, particularly AI, will do to our brains.
> However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
This perspective is funny to me because of how much the modern web is already built around web developers refusing to use CSS and PHP. The giving up of the skills happened before the automation.
Dubious. AI psychosis is the opposite. It’s about being empowered to explore ideas much further, but with a maladaptive tool designed by reinforcement learning to be an appeaser.
I'm hearing a lot of opinion, but nothing convincing.
All knowledge started as someone's opinion. The goal isn't to avoid opinions, it's to stress-test them. That's exactly what HN is for.
This is an insane claim:
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
Technology is like much of material reality, in that we can think whatever the hell we like about its various forms, especially so if we’re surrounded by it.
It’s not insane. They are correct: that is the point of civilization, which carries information from generation to generation outside the oral tradition in a systematic, organized, reliable way.
The point of civilisation, however loose that idea may be, is, if it’s anything at all, determined by people.
Technology exists today in a way that feels like it could be defining its own path in a sense, but much like oral tradition, neither are large enough concepts to describe civilisation.
Technodeterminism is a common feeling in paradigm shifting moments. Don’t forget who is at the helm of the change.
Or for the memetics fans out there, the point of people, if it's anything at all, is determined by civilization..
The line is fine. Even if I use GPS a lot, I still try to keep my ability to interpret maps and find my way.
Same with calculators: even though they are dirt cheap today, they are not allowed in school, and being able to do math without one is a valuable skill.
So maybe there are like 2 groups of things: one where using it you are losing nothing, another where you lose some valuable ability.
It's quite difficult to tell exactly the extent your life depends on technology.
How different do you think your life would be if the combine harvester did not exist?
Combines have had modern AI image-recognition cameras (the same technology as an LLM) in the base model for a few years now.
> I want to shake these people by the shoulders... Do you use a microwave?
Microwaves aren't doing active problem solving, though. It seems what the author is trying to say is that they enjoy problem solving and find coding a rewarding and creative experience. Sure, at-home cooks saved time by the microwave might enjoy zapping a frozen dinner, but the author is a chef who enjoys writing their own recipes and cooking from scratch. AI isn't just the microwave, it's also the chef.
> None of us have "lost" the ability to go backwards if we really wanted
This absolutely isn't true. Using Google Maps quickly makes people poorer at navigation - skills need to be practiced. The author thinks letting AI into their kitchen to cook for them will change them cognitively and make them lazy and lose their skills. And that would be true.
What it sounds like you're getting at but never said is there might be newer skills on the other side that are even more rewarding, which may be true. But if history is any indication, there will be no shortage of folks who like things the old way and want to use their meat brains to provide bespoke goods and services that AI can't.
Agreed, this is the aspect of the AI criticism I find strange too. We should want to be targeted in how we use it, just as how a practical fusion reactor wouldn't replace solar in every situation. Not reject it outright.
We should be using these capabilities to allow ourselves to work on harder problems. In science, there are a lot of tasks that require a low, but non-zero amount of intelligence and aren't really the most interesting part of science. Many of these tasks limit how much work can actually be done. Automate them, and you can dramatically increase your capabilities and focus on the actual science work.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
None of these things allow you to turn your brain off while the machine does the work.
I still have to DRIVE the car and all the thinking that goes with that. It's not a robotaxi.
I still have to acquire and prep the food I am microwaving. It's not a replicator.
I still have to know what I want to eat before grocery shopping and prepare the food. It's not a take out restaurant.
I still have to know how to use the power tools to carefully shape something into a fine piece of furniture and not a pile of splintered firewood. Power tools can't operate on their own, unless aliens (see Maximum Overdrive).
These are better analogies:
Do you take a taxi or public transport? Those let you turn your brain off while someone or something does the driving work.
Do you go to a restaurant where you can pick what you want, turn your brain off and wait for a delicious (or not) meal?
Do you order takeout where you can order what you want from the comfort of your home, turn your brain off and enjoy the meal when it arrives? Then reheat the leftovers in the microwave.
Do you use a fabrication service where you send them a drawing, turn your brain off, and they ship you an assembled thing?
All of your examples involve you sitting and waiting. That doesn't seem like an apt analogy for what AI can do. You don't have to sit there and come up with other things to do while the AI does the work.
When AI works (and technology in general) that's kind of what it's like. You'll never perceive that you are not doing the work anymore because you won't perceive the work.
Excellent point on the automatic car.
I love driving a manual transmission. But I also understood why it was so hard for me to find a new Jeep Wrangler with a manual transmission a few years ago.
The automatic transmission gives us more dexterity for... what exactly? Fiddling with the dash, reaching for something in the back seat, texting? The best case human has much more control but the average case seems worse off.
I think of myself as very practical - I drive a manual, I fix my own cars, I do my own house projects, I cook my own meals.
Which is part of the reason these anti-AI screeds fall on deaf ears for me. My generation has willingly abandoned all of these legitimately useful hard skills. But there's also nothing preventing you from picking and choosing what you care about.
What's wrong with choosing to care about coding manually then?
I'm not actually against manual coding. I just think people need to be honest about why it's valuable.
I don't work on my own car because I believe that everyone should fix their own cars. But I think enough people should be knowledgeable and have these skills in society - if for no other reason than to keep mechanics and automakers and dealerships honest. I am not personally upset if you work on your own used car or take it to your dealership.
I am against the idea that everyone should somehow be against AI coding.
I don't want any machine doing my thinking for me. This is why I am in favor of banning traffic lights. Why should I trust a machine to tell me when it's safe to stop and when it's safe to go? Plus, we could employ police officers to stand in the middle of the intersection and direct traffic, thus contributing further to employment.
;)
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools
Which of these is behind a subscription paywall and owned by another party that would cut off your access immediately?
These comparisons make little sense, which is the problem with comparisons. They are soundbites from enthusiasts who don't know or understand how this technology will actually affect or shape us, but feel entitled enough to misinform the rest of us.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
I'm not addicted in any way to an automatic car. I prefer an automatic car, because it's easier to drive than a manual car. There have been numerous studies already into the problematic nature of AI addiction, and calling it simply "progress" is dismissive of the experiences of tons of people who have been harmed, up to and including dying, as a result of too much AI use.
> But the invention of the plow did not, in fact, make us lazier or stop using our brain.
No but industrial farming practices are not an unalloyed good either.
> But none of us have "lost" the ability to go backwards if we really wanted.
I mean, we kind of have in a few ways, at least insofar as the AI boom is concerned. I can't have a version of Windows that doesn't have Copilot in it. I can't have Microsoft Office without Copilot. I can't have Photoshop without generative AI features. Like, say what you will about the AI doomsayers, and yes, even this one I think is overstating it a bit? But the AI push is relentless. It's everywhere, in every product, all the time. Last time I was at Home Depot I saw an AI powered microwave, for fuck's sake.
And, that's not to say there are no problems at which LLMs are good solutions, but it isn't this many. I use Claude to generate code, usually boiler-plate type stuff or to help me solve problems, and it's legitimately quite good. Conversely, generated images and video have always, always looked like absolute shit to me. Generated music is... okay? But as a consumer I barely have a way to choose a non-AI future if that's what I want.
> You can finally ask a computer to think and solve problems, and it will!
Sometimes. Other times it tries for a while and gives up. Other times it makes some shit up that would solve your problem, and Omnissiah be with you if you follow those instructions. Other times you argue with it for 10 goddamn minutes because it doesn't comprehend your instructions.
> If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
That is flatly ridiculous. LLMs do a lot of interesting things, that I will grant, but they are not the problem solver you're pitching them as, and certainly nothing like a Fusion reactor.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
The answer to these questions could easily be no, and life is way better for it.
Hah, I actually only hit 1/4 for that set of questions. I'd prefer to drive an automatic, though, if money wasn't an issue.
Reposting a comment (and one of the replies) since it's relevant:
---
It occurred to me on my walk today that a program is not the only output of programming. The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
https://www.youtube.com/watch?v=ZSRHeXYDLko
---
Munksgaard 1 day ago:
Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf
For what it’s worth (ie, absolutely nothing), I agree with her 100%. I didn’t get into this field in order to prompt an AI to take care of the details. I got into it because I love the details.
I’m a strong performer on a good team at a company many people would want to work at… and I know the clock is ticking. Sooner or later, I will be too slow.
I’m not going to claim that this is the wrong way to go. It’s obviously the future, and the future doesn’t care what allenrb does or does not want. I’m somewhat hopeful that power and cooling requirements will come down by multiple factors of 10x over time, reducing the environmental damage.
The fact is, I love what I’ve been able to do “the old way” and just don’t feel the urge to move on. So it goes.
Someone the other day was talking about there being two kinds of builders. One likes the details of doing, where the other likes the things they produce.
The idea was that one likes AI and the other naturally hates it.
I thought about that for a bit and decided that, like most things, if you’re any good at something the “hard way” you probably have some of both. Or at least I’m sure it’s true for me.
I LOVE that I can produce the things I want to create without spending months crafting lines of text. The “I know how to architect this, I know what a decent data model looks like, I have a good idea of where someone is likely to introduce security or scaling problems. I can pilot this plane and produce something GOOD.”
But, I really also HATE looking at the final product and forever measuring, in my head, how much of it is even mine. Which parts I haven’t thoroughly reviewed, or would have spent a week learning and didn’t, or maybe wouldn’t have accomplished correctly at all? Am I a fraud, now? I wasn’t before…
It’s a really painful trade for me.
48 years old and I am 100% feeling this.
Yes, I am much more productive having Claude Code bang out boilerplate back-end code, but honestly I always kind of enjoyed doing it. Now I'm just a micro-manager for an AI.
And honestly, how long will that last? Given that LLMs came out of nowhere to radically redefine my role from software engineer to prompt writer in just a couple years, I have every reason to believe that they're coming for my role as prompt engineer next. (As my CEO surely hopes.)
I'm just glad the timing of the great AI replacement began right when I was nearing burnout anyway.
While I applaud her and wish her well — writing like this reminds me of a couple of things.
First my aging father insisting on navigating using his unfortunately fading memory instead of Google maps. Some people just won’t pick up technology out of habit or spite, even if it hinders them.
Second, a quote I read here that I’ll paraphrase “you can be the best marathon runner in the world and still lose a race to a guy on a bike.” Know the race you’re racing. It often changes.
I think it’s valid and commendable to keep the old ways alive, but also potentially dangerous to not realize they’re old ways.
I don't think this diminishes your point, but, for a thing like memory, your father may be maintaining it by insisting on relying on it. It may diminish regardless, but its diminishment may slow down.
At work, we are in a certain kind of race. In life, we are in a certain other kind. To paraphrase a recent Brandon Sanderson talk about creativity in an era where AI can outpace and possibly soon, out-quality a professional, "The work you do on _you_ can be _the art_."
Strongly agree! As someone who has been caring for a parent with dementia, it's definitely a use-it-or-lose-it kind of situation. See also the studies on long-term cognitive health in London cab drivers:
https://www.statnews.com/2024/12/16/alzheimers-disease-resea...
I had a significant other 20 years ago who would not use a GPS. This resulted in constant fights whenever she travelled. If she got off her route, I got a phone call. I lacked the skills to divine her exact location and what direction she needed to go based on vague descriptions of being on "some highway" for "some amount of time" near mile marker "I don't know." After hanging up on me she would eventually stop somewhere or ask someone or figure something out or maybe never come home.
Then one day, on the way to an OB appointment, she almost plowed into the car in front of her while she was looking at her MapQuest pages. Risking our unborn child.
Even after pointing out the danger she claimed the guy in front… He did no such thing; I saw everything from my position in the parking lot.
I bought a GPS unit “for me” and put it into my car. I just used it. If we travelled in my car she still insisted on her printed maps. I ignored them. (This was very intense.)
Then one day we took her car for a trip and I brought my GPS. And “forgot it” in her car. I claimed I would remove it “later”.
About two weeks later she gave me the look and said not to laugh. Dead serious. She then said "the GPS is ok" and it can stay in her car.
Hallelujah! The life expectancy of my wife and child just went up exponentially.
To this day, I have no idea what her hangup was. The best I could come up with was she was bad with directions. Was probably taught how to read a map. And her father probably instilled her sense of pride in the ability to read a map. And choosing to use a GPS was retroactively wasting her time learning how to use maps. And devaluing a skill she worked hard to learn.
I don’t care. I just wanted my family to live.
> Know the race you’re racing
This is the KEY difference between people who are willing to adopt this technology and those who aren't.
If you are able to view your job as simply a pursuit of a craft, more power to you.
The reality is likely that over time your employer will realize you are slower than every other engineer, and that your enjoyment of the craft is actually just you being an old slow developer.
The "race" here is the race with every other developer out there. They're getting on bikes, and starting to pull away ... what are YOU going to do?
Yeah, I think this article put a finger on what I was feeling after using Claude Code for the first time to convert a PDF to a Markdown document[0]. I think I will update my article with these thoughts. Thanks for touching on something I had been feeling. It also felt like I was cheating. I also used CC to update the version of my SSG, and that was good because I did not want to spend my time dealing with that. But there are certain projects where I can see myself not feeling good if I used the tool to help me with them.
[0]: https://www.scottrlarson.com/publications/publication-my-fir...
I don't have a stake in AI, but more and more I see the following patterns:
- people that give in to AI do so because the technical merits suddenly became too big to ignore (even for seasoned developers that were previously against it)
- people who avoid AI center their arguments on principles and personal discomfort
Just from that, you can kind of see where this is going.
Most arguments against it are built on some moral principle and not on objective reality of usefulness.
Crypto used to be the thing to hate, but that made sense, as the objective usefulness of crypto was meek. AI models were always crazy useful but prohibitively expensive. You'd need an entire team to build your models. Now you don't.
Yeah, things get more and more terrible over time. Your point?
That's true for new projects; I've found that as a bootstrap mechanism it is not ideal. It takes away your hard-earned preferences and know-how and leaves you barely engaged on the technical path. On the other hand, I already have quite big codebases, some legacy and inherited, with a vast amount of context to work through for fixes, bugs, and improvements, and Claude Code excels 95% of the time at figuring out the existing flows and functionality. Given a vague or precise issue affecting some files or user workflows, it can do a good job of pointing out what, with high probability, the issues are.
I've found two personal use cases for LLM generated code:
(1) I have an idea for some app, but either I feel it won't be useful enough/save me enough time to justify developing it, or I simply don't feel the problem is interesting enough to be motivated by it. In that case, a vibe coded tool is perfect. It generally does one simple thing, and I don't care about long term maintenance, because it just needs to keep doing that thing.
(2) Adding a feature to an open source project. Again, it's a case of "I want this feature, but am not willing to spend the time needed to implement it." Even a relatively simple open source project can take a day or two just to get a basic understanding of the code and where I need to make the changes. Now I can often just get a functioning vibe-coded implementation within a few hours.
(2) leaves me with some unsettling feelings about how this will affect the future of open source software. Some of the features I've implemented this way may very well be useful to other users, but I can't in good conscience just dump a vibe coded pull request on a project and expect them to do the work of vetting it. But if I didn't have the energy to implement the change myself, I'm definitely not going to bother doing the work of going through all the LLM generated code, cleaning it up to the standards of the project, etc. Whereas before I didn't have a choice, and the idea of getting the change ready for a PR was much less daunting since I understood the problem space and solution well.
So at least for myself, I can see a future where many of the apps I use are bespoke forks of popular applications. Extrapolate that to many, many people and an interesting landscape emerges.
Some of us have been waiting our whole lives for a comprehensive DWIM command.
> DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user's request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. [0]
— Teitelman and his Xerox PARC colleague Larry Masinter, 1981
[0] https://en.wikipedia.org/wiki/DWIM
Why can’t people just acknowledge AI is good at some things and bad at others. Why does every post say AI is either groundbreaking or terrible. Get a grip people. It’s a tool.
Nuanced opinions don't get clicks and don't fit in tweets. Algorithms hate context.
We've created a communications system bottlenecked by virality and short form text and video in which all nuance and context is stripped from everything.
This, far far more than anything AI is doing, is what's making us dumber.
I am the opposite. I do not understand how people do not get at least 5x more productive with GenAI.
Maybe it is because I do not do much front-end design. Maybe it is because I'm a bit more diligent than your average "viber", or maybe because from experience it is easier for me to spot a suboptimal solution or challenge it with edge cases, etc.
But these people turning their backs, not in principle, because that I fully understand, but because of underperformance?
Maybe their expectations are way out there? Maybe (most likely) it is the application domain? Maybe plainly a skill issue?
But seeing how GenAI is plowing through fields, I would not turn my back on it even if it wasn't there (yet) in my domain.
Here’s my experience: just yesterday I had to tackle this task that’d have required a backend engineer and a frontend engineer several days, so I tasked several Claude code agents to work on them autonomously. With the time freed up, I didn’t just twiddle my thumbs. I used it to read up on this topic that was making the rounds yesterday and gained a better understanding of it - something hard to do when you juggle both a job and raising a family. I could then reinvest the time I used to learn something by using them in some other projects.
Just my two cents. No matter whether you use AI or not, I’m sure you’ll gain something.
> When you know CSS well (and I would consider myself as someone who does), you quickly find weird and broken things in the generated code.
You have encountered https://en.wiktionary.org/wiki/Gell-Mann_Amnesia_effect
I know I read several posts like these every day, but I've been thinking more and more that I should stick to the chat window and having AI guide me instead of doing the work for me. Great tool for showing me what to do when I don't know and offering me guidance, and rubber ducking of course, but I definitely lose the context and understanding when I just let it rip and write everything for me.
> I've been thinking more and more that I should stick to the chat window and having AI guide me instead of doing the work for me.
That's how I have been using AI for years. I feel like my productivity has skyrocketed over the past year or two, and all my code is still written by hand. It's like having StackOverflow on demand. I also never really have to worry about tokens or usage limits. I don't think I have ever hit the limit on the $20 Claude plan, and I use Claude every day.
Who else struggles with both sides of this? My engineer side values curiosity, brain power, and artisanship. My capitalist side says it's always the product, not the process. My formula is something like this: product = money, process = happiness, money != happiness, no money = unhappiness.
I think the optimal solution is min/maxing this thing. Find the AI process that minimizes unhappiness, and maximizes money.
> My capitalist side says it's always the product not the process.
Your capitalist side needs to read some Deming. "Your system is perfectly tuned to produce the results that you are getting." Obviously, then, if you want better results, you need to improve your system.
Also "the product" is ambiguous. Is it the overall product, like how the product sits in the market, how the user interacts with it to achieve their goals, the manufacturability of the product, etc.? That is Steve Jobs sort of focus on the product, and it is really more of a system (how does the product relate to its user, environment, etc). However, AI doesn't produce that product, nor does any individual engineer. If "the product" means "the result of a task", you don't want to optimize that. That's how you get Microsoft and enterprise products. Nothing works well together, and using it is like cutting a steak with a spoon, but it has a truckload of features.
I definitely struggle with both sides, or maybe multiple sides. On the one hand most of my daily output at my job is coming from AI these days. On the other hand I find the explosion of AI-generated "writing" (and other forms of art) to be aesthetically abhorrent. And I've just recently started a ... weird sort of metaphysics / spirituality / but also AI related writing project, so the difference between creation with and without AI is in really sharp focus for me right now.
I wrote an article about this, but honestly I don't think I really captured the totality of my feelings. I really haven't decided where I land. I'm definitely using the tools for economic purposes, and I even have some "pure-fun" side project stuff where I'm getting value from it.
Here's the article if that sounds interesting, would love to discuss the whole topic with anyone who's finding themselves of two (or more) minds on these sorts of issues: https://hermeticwoodsman.substack.com/p/why-i-let-ai-write-m...
What did you first order at the bar? Did it burn? Did you become a teetotaler?
Latecomers lack the hundreds of iterations and the experience that comes with it. The senses haven't been trained.
There's a business here. Not one I want to be in, or one without major ethical drag, but a viable one. Fend off extinction, get fit.
How is there such a wide gap between developers? Either running ten agents pushing thousands of LOC a day or afraid to paste a code snippet from ChatGPT.
"I'm not going to use this technology that obviously enhances my productivity because <insert emotional subjective reasoning that no customer would ever care about here>."
I think a lot of people have forgotten why we actually get paid to write code. The person who wants an automated billing system doesn't care if you hand-typed it or not, or if the CSS that would have taken 2 hours to write took 8 seconds via an AI plus 60 seconds of you tweaking a border you didn't like. They just want their billing system. And if you are the person that takes 20x longer to build it, you're going to quickly get outcompeted. Sorry.
Give me Understanding or give me death.
Customer doesn’t give a fuck how long a billing system took to make, they only care that it works correctly.
A billing system only truly gets built once, then possibly maintained in perpetuity. This makes the advantage of building it 20x faster pointless. AI builds it in a day; will it matter 5 years from now if that billing system was instead built by hand in 20 days a long time ago? No.
The speed advantage of AI only comes into play when you have a lot of code to crank out continuously.
Do you have a need to constantly build bespoke billing systems at a rate of 1 per day? Probably not. So who cares. Take your little AI grift charging $1000/month somewhere else. It’s not needed.
Every billing system in use is constantly maintained with new features, bug fixes and the like. The system of 20 years ago would apply the wrong tax laws today. The people asking for a new feature today care about how easy those are to add.
I think adding new features is exactly the sort of place where AI is terrible, at least after you do it for a while. I think it's going to have a tendency to regenerate the whole function(s), but it's not deterministic. Plus, as others have said, the code isn't clean. So you're going to get accretions of messy code, the actual implementation of which will change around each time it gets generated. Anything not clearly specified is apt to get changed, which will probably cause regressions. I had AI write some graphs in D3.js recently, and as I asked for different things, the colors would change, how (if) the font sizes were specified would change, all kinds of minor things. I didn't care, because I modified the output by hand, and it was short. But this is not the sort of behavior I want my code output to have.
I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the code I've had to clean up, written by people who probably should not be developers, look fairly straightforward in comparison.
That is why I understand everything before I commit. Ai can write a lot of bad code - but an expert can guide it to good code.
The customer cares how much it costs. And how much it costs is proportionate to how much time it takes to build. You’re conveniently ignoring market and price dynamics
The idea of getting information from other engineers relies on the idea that the other engineers aren't already complacent with AI.
Pretty insubstantial high level tour of broad AI pushback. Goes from "It was just not 'elegant.'" [sic] to "I don't want to give up my brain" [sic].
Exactly, the article is silly, the perspective idiotic. She says it's "addictive" like driving a car is addictive compared to walking???
Patient: "Doctor, it hurts when I do this." Doctor: "Then don't do that!"
I'm finding that how you choose to use it makes all the difference in whether it's useful or not. I understand the reticence to jump on the hype train, and it's taken some reps to find the parts of building with AI that I don't like, how to navigate them, and how to keep it from making choices I wouldn't make or that are low quality.
> asking for a recommended tech stack
this is up to you. you can just tell it what tech stack to use. better yet, bootstrap the project yourself and give it to AI as the starting point. nobody is saying AI has to make these choices for you and you're not allowed anymore.
> I wasn’t happy with some of them because of my own experiences in the past... Even when deciding against something for a reason, Claude Code tried to push me back on the suggested track.
this kind of sounds like many human teammates at work... you don't always like their suggestions or they aren't convinced by your arguments? the difference being with AI you can just tell it what to do, no persuasion required.
nothing about AI prevents you from thinking about design choices, architecture, data modeling, or even the minutiae if you want to. the only thing telling AI to do those things for you is you!
Any sufficiently advanced technology is indistinguishable from magic. -Arthur C Clarke
> I don’t want to feel this kind of “addiction.”
Getting a feeling of "wanting to keep going" with something does not automatically make it an "addiction".
> I don’t want to depend on something doing the work I earn money with.
A tale as old as time, and a valid feeling, though not particularly helpful to dwell on since the technology will never go away and never get worse than it is right now.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
Your brain will think about other important things and you don't need to become "lazy" just because a machine is doing something that used to require more effort on your part.
> I enjoy technical discussions with (human) co-workers.
So what? You can still have those discussions.
> I enjoy reading blog posts and tutorials and learning from other developers.
So what? You can still read those blogs, but the subjects might shift away from coding minutiae to other topics.
> I want to learn and grow and become better at what I am doing by trial and error and mistakes I make all by myself.
Going forward, that trial and error process will start to happen more at the product/project level rather than the source code level.
> I don’t want to be part of a trend/hype destroying our planet even faster than we already do without it.
It isn't. https://blog.andymasley.com/p/the-ai-water-issue-is-fake
A lot of this is just wrong. AI can now see the output.
It's interesting that highly flawed opinion pieces like this are so popular.
I'm a little tired of the environmental argument against AI. It feels contrived, like people are fishing for a "problemism" to use to oppose it to avoid harder discussions.
Let's compare AI to one typical 20 mile round trip commute. I asked Gemini and Claude and compared to see if the results looked good, but feel free to check.
One ~20 mile round trip commute: about 5,700 Wh in an EV, about 27,000 Wh in a gas car (due to thermal efficiency).
Compared to the EV, that's about 1,400 ChatGPT queries, 2,800 AI code completions, or 380 AI image generations.
Ordering lunch on Doordash uses the same power as days and days worth of very heavy AI usage, and that's if the dasher is driving a very efficient car. If they're driving an inefficient gas car it's like weeks of heavy AI usage.
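The back-of-envelope math above can be sketched out; note the per-unit figures (Wh per mile, per query, per completion, per image) are rough public ballpark estimates plugged in to reproduce the comparison, not measured values:

```python
# Rough energy comparison: one 20-mile round-trip commute vs. AI usage.
# All per-unit constants below are ballpark assumptions, not measurements.

EV_WH_PER_MILE = 285           # assumed typical EV consumption
GAS_WH_PER_GALLON = 33_700     # energy content of a gallon of gasoline
GAS_MPG = 25                   # assumed typical gas car

CHATGPT_WH_PER_QUERY = 4       # rough estimate per chat query
CODE_COMPLETION_WH = 2         # rough estimate per code completion
IMAGE_GEN_WH = 15              # rough estimate per generated image

commute_miles = 20
ev_wh = commute_miles * EV_WH_PER_MILE                 # ~5,700 Wh
gas_wh = commute_miles / GAS_MPG * GAS_WH_PER_GALLON   # ~27,000 Wh

print(f"EV commute:  {ev_wh:,.0f} Wh")
print(f"Gas commute: {gas_wh:,.0f} Wh")
print(f"= {ev_wh / CHATGPT_WH_PER_QUERY:,.0f} ChatGPT queries")
print(f"= {ev_wh / CODE_COMPLETION_WH:,.0f} code completions")
print(f"= {ev_wh / IMAGE_GEN_WH:,.0f} image generations")
```

Even swapping these estimates up or down by 2-3x doesn't change the conclusion: one commute dwarfs a heavy day of AI usage.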
Ultimately what matters is where we get our power. If we are getting it from CO2 emitting sources, what we do with it after that is not relevant. Make AI memes? Order burritos? Boil spaghetti? Who cares. The solution is to replace CO2 emitting sources with cleaner sources.
I also think people are avoiding the big fat elephant: wealth inequality. The whole problem with AI that bothers people is loss of jobs and possible wage suppression. The problem isn't AI, it's inequality and the fact that our system is basically regressive at this point with wealth being actively transferred upward.
But that's a hard complicated discussion and involves confronting powerful forces. It's easier to make stuff up about AI being some uniquely bad energy or water waste when it's not. This is really what "problemism" is all about: using a contrived or exaggerated or mis-attributed problem to avoid a hard or complicated conversation.
to be clear, the work you’re doing is only relevant to the extent it produces product
nobody cares if you use your brain or not. they do care if you’re efficiently delivering reliable product
I care if I'm using my brain or not. I don't care that nobody else cares.
Human thinks eating food from fire is bad because it looks charred and you might burn your fingers doing so. /s
I remember the time when people insisted that they would never use a mobile phone. I remember the time when people didn't understand my presentation about the magical "internet" (8th grade, in '94).
I love posts like this. AI is easily the most disruptive thing to hit our industry in over a decade and it feels like one of those "this changes everything" moments. Reading how it's impacting others is cathartic and helps shape my own understanding.
Here are some thoughts I have from reading this article:
> The AI can’t “see” the output, so some responsive refinements were just not correct. Within one CSS rule block there were redundant declarations.
This 1,000%.
Vibe coding has its issues, and for me personally, frontend polish, responsiveness, and overall quality are the #1 most glaring of them — the kind that simply re-prompting often can't solve.
Even with the ability to screenshot your UI, that hasn't solved things like glitchy animations. If you want to do anything even remotely above a junior level, like scroll animations or page transitions, good luck. AI will certainly try to do it for you, but inevitably it will not work perfectly and you will need to manually refine or even rewrite code. When the code base isn't yours, that makes these rewrites a lot less fun.
> The guilty conscience at the same time, like I was cheating. I realized that when I move on like this, my project will never truly feel like my own.
I've wrestled with this over the last year, and still do to some extent. I'm trying to shift my perspective and envision myself as a brand new developer maybe 16 or 17 years of age. Would I think this isn't my work? I doubt it. I'd probably just (correctly) assume that this is the state of the art, this is how you do it.
Unfortunately this doesn't fix a bigger problem... I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind, and there's a strong back-and-forth effect where, as you code, the concepts crystallize further. Small wins fuel a wonderful dopamine-hit experience... intellisense completions, compilation completions, page refreshes, etc. These are now all replaced with dull moments spent waiting for the agent to return its response, which you then read.
> I’m curious (and a little bit scared) to see where we will go from here. I hope that in the end I can be part of a community that values craftsmanship, individuality and honest, high-quality work.
I really hope so too... But speaking honestly, I think this ship is sailing away quite quickly.
Time is money, and it always has been. Very few organizations can afford the luxury of time when building, designing, etc. I see no chance of this genie going back in the bottle, and I believe it has fundamentally changed (and will continue to change) the nature of our work.
Over time as these models improve, there's a chance it could dramatically reduce the overall need for developers... It will start with low level teams as we're seeing already, but could expand.
I have been saying this to everyone -- what's your exit strategy?
I'm not saying you need to panic, but you need a plan for what happens if/when salaries tank dramatically. I hate to be "that guy," but in life I've found that expecting the worst isn't always a bad thing. Keep your mood up, prepare for the worst possible outcome, and be pleasantly surprised if that's not what happens.
I would pay money to read rants on a forum like Hacker News, but from the 190Xs when cars started sharing the roads with horse carriages.
I bet it would look something like the posts we are seeing today with developers and agentic AI.
I'd bet that a lot of those posters would have accurately predicted and called out many of the very real harms that cars have caused our society and shown that many many mistakes we've made could have been avoided.
It’s called a library. It’s free. Plus you get to say microfiche which is fun. :)