I feel like the OP was so close to the epiphany that meaning and utility are orthogonal values, but then they retreated back to the safe ground of avoid-efficiency-if-you-want-meaning.
They talked about it with knitting: obviously, buying yarn is a time-saving efficiency compared to spinning your own, but that didn't destroy the meaning of the gift. Nevertheless, if I spent time spinning my own yarn and gifting it to my mother, I bet she would find it meaningful.
Similarly, they talk about how they could use a mechanical knitter to save time, but that would reduce the meaning. I think that depends. If I designed a beautiful pattern and then used a mechanical knitter to create a blanket, I bet my mom would find it meaningful.
The epiphany is that meaning doesn't come from how hard I work. It comes from personal relationship and how my actions interact with that relationship. If I knew my mom was looking for a specific, somewhat rare purse, and one day I happened to find it at a yard sale and give it to her, that would be a meaningful gift, regardless of how much effort it took me.
Meaning is not an intrinsic quality of objects. It is not an arrangement of atoms or a quantum field. You can't see meaning in a microscope. Meaning is only in our heads. And because it is only in our heads, AI can't destroy it.
AI is just a tool, like a mechanical knitter or a jackhammer. Sure, maybe not all tools are equally useful for creating meaning, but no one ever worried that meaning was under threat because jackhammers got invented. AI is no different.
It's interesting: playing devil's advocate on the purse example, I'm not sure I'd actually call that meaningful. It's certainly a kind and thoughtful gift, but is it meaningful?
What if meaning isn’t only about the relationship or the recipient’s satisfaction, but also about what the gift costs the giver in terms of personal value? In the yard-sale case, the purse may be rare to the mom, but it’s essentially cheap and disposable to the giver. Nothing important was surrendered.
By contrast, giving something you personally prize, or investing yourself in a way that reflects what you value, seems to carry a different kind of meaning. I'm not claiming effort alone creates meaning, but the giver's valuation of what's given might be a missing dimension in this framework.
Value is up to the receiver, not the giver.
The purse can be meaningful because of the implication that you remembered a small detail for an extended period of time. “Aww you remembered”. The AI equivalent would be to ask your personal assistant to find the purse for you and you keep it hidden for an extended period to pretend you remembered.
In both cases your mom will be touched by the gesture.
I'm literally considering a career switch from software engineering to electrical engineering and electronics, and naturally going back to school, because the AI and the way it's used in writing software has sucked out all the meaning in it for me.
Absolutely the same boat here as well, sadly. As I sit here reviewing another PR with ... dotted everywhere, I am very much over it.

Meaningful or not, that's a lot of work and money for a pay cut, fewer options, and worse job prospects [0].
[0] Most of the new EE Grads I see go into software engineering.
That's me! I found I was treated much better in software: the timelines were much more reasonable and your input was valued. When doing firmware, typically everyone's timelines would slip but the delivery date wouldn't, which meant your time with a fully functional device (if you even got one at all) would shrink. Half the time you're developing on a half-broken version of the real device.
I mean, it's not like I'm going to forget how to write code any time soon. I'm currently recovering from the n-th tough burnout, and I feel like I need a shift toward something new and meaningful. When it comes to job opportunities, from my perspective, knowing the entire stack (not in the webdev sense, but rather both hardware and software) makes you highly attractive on the market, in both worlds. So, while I genuinely can't predict what I'll end up doing 5 years from now, I do feel like it's time to familiarize myself with the other side of the entire stack.
That's one possible way to look at it. The other, perhaps more positive way to look at it, is that similar to autocomplete, AI-assisted tools have made the boring parts less boring and left more space for the interesting bits. I use them every now and then for chores and such which I would put off otherwise, but there is certainly no shortage of interesting problems that they can't tackle. Now I just have more time to focus on those.
I'll admit that I used GitHub Copilot while working on one of my projects, and I couldn't help but notice a rather significant cognitive decline whenever I set out to take over and start hammering out the code myself. I just can't allow that kind of cognitive decline.
I'm in the same boat; I'm hoping firmware/embedded might be better in this regard due to the inherent constraints. If not, then EE is probably the only other option. Anyone else have thoughts on this? I'm craving a more civil-engineering approach to rigor rather than the mess of modern software. Perhaps that means software just isn't for me.
I think the last thing this world needs is programmers bringing their particular style of “””engineering””” to important things like bridges. Can’t wait to hear about the 5-9s of uptime on the Golden Gate.
...why do you think this comment is warranted?
Because programmers probably think it'll be a similar field, but it's different. It has correct and incorrect ways of doing things, strongly enforced. You're not inventing new shit, you're reapplying old shit constantly. Old shit that works.
Many think writing software is engineering, but it couldn't be further from the truth.
edit: to clarify, I'm trying to get people to realize the grass isn't always greener; both sides are better off for it.
How many programmers-turned-X do you have experience with?
I definitely want to do something with more of a civil-engineering approach to rigour. More and more, I think software is full of children who don't care and don't know the meaning of responsibility.
Maybe formal methods have a chance of becoming mainstream now [1]?
This would increase the rigor of software engineering and put it on par with civil engineering.
Some niches like real-time embedded systems are already pretty much the same.
[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
I doubt it; I feel like it might improve shops that already care and already build with rigor, but I don't think it'll raise the bar for the average shop. However, perhaps that's just me being cynical. By "real-time embedded is the same", do you mean the same in the sense that they are just as poor in quality?
> [...] the same in the sense that they are just as poor in quality?
I mean some real-time software for critical embedded systems has an incredible level of rigor, making heavy use of static analysis, model checking, and theorem proving.
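For anyone who hasn't seen that world, here's a minimal, hypothetical sketch of the style: an ACSL contract of the kind Frama-C's WP plugin can check statically. The function and bounds are invented for illustration, not taken from any real codebase.

    #include <stdint.h>

    /* Hypothetical example: the contract states the preconditions,
       what the function may modify, and the postcondition. A prover
       discharges these obligations, including the absence of signed
       overflow, before the code ever runs. */

    /*@ requires 0 <= a <= 10000;
        requires 0 <= b <= 10000;
        assigns \nothing;
        ensures \result == a + b;
    */
    int32_t add_bounded(int32_t a, int32_t b) {
        return a + b;  /* cannot overflow given the preconditions */
    }

The point isn't the toy function; it's that in shops at that level of rigor, essentially every function carries a contract like this, and the build fails if the proof obligations can't be discharged.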
Noted, perhaps I'll investigate as a possible next career step. Thanks!
Yeah, I'm the same: I enjoy thinking through a system and then building it, not typing a couple of commands while a bunch of code gets generated. It's not the rote process; it's the feeling that I worked for it, that I did it. A similar argument is "why buy milk when you could get a cow and milk it yourself", that kind of thing. And I do see that some people don't care what the code looks like, only whether it works.
Tangent: perception of reality is funny. Why hike, when you could just watch someone else hike? I think jewelry is worthless (a waste of money to buy) but others don't; that kind of thing.
Generative AI didn't need to come into existence to suck all the fun out of computers originally intended to solve problems for humans; the proliferation of utterly mind-numbing devops slop that already dominated modern development had done that on its own.
That's the difference between the original "free software" community and the "open source" one to me, including its modern incarnation on GitHub: the former produces a collaborative effort to develop a UNIX replacement/alternative and a thriving ecosystem around free software desktop environments with apps intended for humans to do their personal computing. The latter gets you Apache Kafka.
I really enjoyed the article and want to both praise and encourage the author.
You get it. It's about value. Keep your eye on that north star and you won't go wrong.
Whose value? How do I value it? Can I reconcile disparate values? Yep, those are the right questions.
For me, I read this and want to give a shout out to Zen and the Art of Motorcycle Maintenance, but that’s just me.
I enjoyed the read, thank you.
Yes! I have struggled with this concern for a while now.
I think that humans are the ultimate arbiters of quality for humans.
Nothing non-human can make that determination, because anything that is not human (an LLM, say) is at best just a really good model. Since all models are necessarily wrong by definition, they can never get it right all the time. When it comes to determining what's good for humans, only we can figure that out.
I also came back to ZAMM when wrestling with this question. There must be something there if we have all independently come to similar conclusions.
As an aside, this is a true meme.
The screenplay part made me think. While it's true that an LLM could potentially generate a better^ script, in writing the script the author would probably have had many ideas that did not make it into the final draft. Yet those ideas would definitely influence the final product, the movie. There's probably only so much you can put in the script itself.
Meaning for the brother is one thing, but as a potential watcher, I would almost always prefer a movie that someone really cared about^^.
^: depending on the definition of "better"
^^: as a fallible human being I am not perfect at detecting that care, but there have definitely been cases in my life when someone was talking about a thing I would not otherwise care about, and their passion made the talk extremely interesting and memorable
Whether Nystrom realizes it or not, and I think he does, this piece is shot through with Marxist thinking on use value, exchange value, socially necessary labour time, and the general structure of capital social relations.
Capitalism has been a fantastically productive system that has also produced a great deal of labour alienation. Nystrom has a deep need to labour for those he cares about: he needs to make the scarf, slowly and badly, for his grandmother.
But the socially necessary amount of labour to make a scarf is now extremely small, and so Nystrom labours in software to earn a higher wage.
The wage doesn't fulfil him so much, because it's labour power directed for the purpose of value valorization (a.k.a. profit), not toward helping those he cares about.
He's skilled and lucky, so he has plenty of surplus left after labouring with which to poorly make a scarf. But unless he already has plenty of capital, he has to work, and his capital has to be put to work too, on things other than badly making scarves, lest it too wither away.
I don't really think it's the effort, to be honest. A short while ago, I made a Custom GPT for my wife[0] that she thoroughly enjoyed and uses all the time. It didn't take me that long. The value in the thing is that I could see what she wanted and make it like she wanted. And on the receiving side, a friend of ours knitted our daughter, Astra, a quilt with her name on it and with this lovely star motif. It must have taken her and her mother, a seamstress, ages. But if she had done it instantaneously, I think I still would have loved it.
As for the other side of things, there is one thing that gives me a mild twinge of envy: I grew up on the command line, and when I'm at it I can knock out a full bash pipeline super-fast. Many of my friends are far better engineers, but this one thing made me great at any sort of debugging and all that. Now everyone has that! I'm USELESS.
Well, not really, but it's funny that this once-unique skill is now meaningless. Overall, I've found that AI stuff has let me do more and more things. Instead of thinking, at the end of the week, "Oh man, I wish I'd made progress on my side project", I think "Damn, that wasn't such a good idea after all", which is honestly far more satisfying!
0: https://wiki.roshangeorge.dev/w/Blog/2025-10-17/Custom_GPTs
The author attributes the meaning a gift holds for the giver (and hopefully the receiver) to the time spent by the giver. They argue that less time spent for the same utility lowers the meaning.
I see something very different:
1. The government post shared as an example of efficiency increasing utility has glaring errors ("journey-level developers"). You will never achieve any improvement on government code bases if the people leading the effort can't pay attention to the most basic and widely broadcast elements of the job. AI used by junior developers will only compound the massive complexity of government systems to the point where they are not fixable by seniors and not usable by humans.
2. Time spent doing something with care, meaningful or not, trains a person in attention to detail, which is absolutely critical to getting things right. People who lazily lean on generating more without attention to detail don't see the real point of the work: it's not to add more stuff to less space (physical or mental) faster, it's to make the existing space better and bigger by needing less in it. The rest is just mental and physical complexity overload. We are about to drown in that overload like hoarders next to a dumpster.
3. If you have ever lived in a small home, you may have noticed that the joy of getting things (usually derived from dopamine-seeking behaviors like shopping or making, or shopping for ingredients so you can make things, or from getting toys for your kids or other people getting toys for your kids) will quickly overload any space in your living quarters. Your house becomes unbearably small, and it becomes impossible to find things in piles or drawers filled with other things if nobody ever organizes them. We have all become dopamine addicts, or the world has turned us into such, and there are few if any humans willing and capable of organizing that world. So most people today will be paralyzed by a million choices that were never organized or pruned down by the owner or their predecessors. The overwhelming urge will be to escape the dread of organization into more dopamine-generating behaviors. We need more of the "moms" who clean up our rooms after we've had fun with all the Legos we can now generate. Or we will all be living in a dumpster before too long.
Journey-level would just be 'senior', no? I'm not familiar with the terminology, but it seems akin to journeyman (i.e. a competent executor of a skillset). A junior would be an apprentice in that universe, I imagine.
Let's cut through some of the noise here. We only talk about these things because we fear becoming obsolete and losing our livelihoods.
Using the scarf example - nobody's paying you to knit it. Knitting it probably won't put food on your table. Maybe for tribal humans that kind of thing did, hence the psychological reward wiring, but not now.
Here's a reality: we do all become obsolete. It's called aging. I don't know how exactly to fit that into that puzzle, but my brain told me it's a relevant reference point, somehow.
A hiring company just wants you to make the thing with an optimal balance between quality and efficiency.
As I see it, one has a few options. A common one is just hoping everything will be okay. Often that turns out to be the case.
Another one is to proactively 'adapt or die'. Master the new way of making the thing even if it tastes bitter. Harder to do with age, but in some sense the obvious choice if you want to be competitive.
Speaking of which, I think we often forget that the world is one big competition for resources and survival. Happiness is a luxury, but we've converted it into a requirement.
I'm not saying I like any of this; I too want to be secure for life following the same old patterns I've already learned.
But I consider that thought a comforting fantasy to be eyed with suspicion.
That said, thankfully, AI is still pretty good at enshittifying things, so I suspect one may not always need to adapt that enormously. I work with a lot of legacy software and have seen that many companies are full of old-school tech debt for which hands-on programming is still a must.
Claude Code or what have you is a little limited when you're writing code for obscure software packages nobody remembers the name of anymore, but on which some companies still depend for their core business logic.
Will it all dry up eventually? Sure. But slower than we expect, methinks.
I love Nystrom's writing, and he's so good at it because he's written so much. A huge part of the value of things is how we grow in the making of them, and I worry that in a world where we accept generative slop, we'll never have the opportunity to woodshed enough to become excellent at a craft.
I'm a good engineer because I've written tons of code, I've taken no shortcuts, and I've focused on improving over my many iterations. This has enabled me to be an effective steward of generative coding (etc) models, but will younger engineers ever get the reps necessary to get where I am? Are there other ways to get this knowledge and taste? Does anyone know or care?
We're in the Anthropocene now, and while probably everyone who knows what that is understands we have the largest effect on the Earth, it also means we now have the largest effect on ourselves. We're so, so bad at taking this seriously. We can unleash technology that idiocracies western civilization inside of a generation; I know this because we keep lunging toward it with ever-increasing success. But we can't just shamble around and let Darwin Awards sort things out. We have nukes and virology labs, not to mention a climate crisis to deal with. If the US political system falls apart because Americans under 65 spend 2-3 hours a day on social media, that's a failed state with a lot of firepower to shoot around haphazardly.
And why do we keep building things that enfeeble us? Did we need easier access to delivery food and car rides, or did we need easier access to nutritious food and more walkable neighborhoods? Did we need social media with effectively no protections against propaganda/misinformation? We know that cognitive ability and executive function decline with LLM use. Can it really be that we think we're actually too smart and we need to turn it down a notch?
There are actual problems to solve, and important software to write. Neither algorithmic feeds nor advertising platforms fall under those categories. LLMs are supposed to solve the problem of "not enough software"; Nystrom points at this explicitly with the Washington Department of Ecology ad. But we never had a "not enough software" problem. Rather, we had a "not enough beneficial software" problem, i.e. we'd be in a far better place if our best minds weren't working on getting more eyeballs on more ads, or on getting the AI to stop undressing kids.
Generative AI isn't empowering us. We don't have people building their own OSes, (real, working) browsers, word processors and spreadsheet programs, their own DAWs or guitar amp modelers, their own Illustrators or Figmas. Instead you have companies squeezing their workers and contractors, while their products enshittify. You can't even run these things without some megacorp's say so, and how are you gonna buy time on the H100 farm when AI took your job?
I'm too tired to write a conclusion. I'm pretty sure we're fucked. But hey look, the cars drive themselves.
> I've written tons of code, I've taken no shortcuts, and I've focused on improving over my many iterations. This has enabled me to be an effective steward of generative coding (etc) models, but will younger engineers ever get the reps necessary to get where I am? Are there other ways to get this knowledge and taste?
I think this will be a problem in the medium term, and I've written about such deskilling before [0]. With the latest crop of foundational coding models and harnesses, and more progress on the way, I'm beginning to wonder if it will matter. If there's a future where agents are designing the code, implementing the code, and reading and reviewing the code, then at that point the code is no longer the thing. "Software engineers" will continue to sit at the interface of product and software, but the software will be writing itself. Of course there will still be a need for programmers who can actually read and write computer code, the same way there's a need for Fortran and compiler devs today.
The skill that all software engineers will need to learn, regardless of level, is how to leverage commoditized reasoning to build products effectively.
- how to design systems declaratively and in terms of requirements and constraints
- how to configure the systems in such a way that they're automatically testable end-to-end
- how to move tacit knowledge out of people's heads and into the context (all of our meetings will be transcribed; questions will be generated by the agent during the meeting to resolve ambiguity; the agent will be an omnipresent attendee in all meetings: "Agent: The topic you're discussing overlaps with what Sally said three days ago when she met with Mike. They covered xyz..."; companies that follow remote-work best practices will have an advantage here)
- how to allocate and orchestrate teams of people and agents
[0]: https://matthewbilyeu.com/blog/2025-03-08/ai
> And why do we keep building things that enfeeble us?
To offer a view which is, well, a different flavor of pessimism: the good news is that we aren't enfeebling ourselves; the bad news is that many humans are being enfeebled by a smaller group of humans as a form of economic predation.
This is a really good piece, especially the part on what it makes sense to build with AI (or not), and why.