This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films". But why? Aren't they using social media, YouTube, Netflix, etc responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.
> Film students who can no longer sit through films
Everyone loves watching films until they get a curriculum with 100 of them along with a massive reading list, essays, and exams coming up.
What is the lesson in the anecdote about film students? To me, it’s that people like the idea of studying film more than they like actually studying film. I fail to see the connection to social media or AI.
AI performs strictly in the Platonic world, as does the social media experience. As does the film student.
Yikes, that was too real
> surely people will be just as responsible with AI
That's exactly what worries us.
> Surely they know the risks, and surely people will be just as responsible with AI
I can't imagine even half of students understand the short- and long-term risks of using social media and AI intensively.
At least I couldn't when I was a student.
Perhaps the films just weren't worth sitting through?
Recently a side discussion came up - people in the Western world are "rediscovering" fermented, and pickled, foods that are still in heavy use in Asian cultures.
Fermentation was a great way to /preserve/ food, but it can be a bit hit and miss. Pickling can be outright dangerous if not done correctly - botulism is a constant risk.
When canning came along it was a massive game-changer: many foods became shelf-stable for months or years.
Fermentation and pickling were dropped almost universally (in the West).
We lose something when we give up horses for cars.
Have too many of us outsourced our ability to raise horses for transport?
Surely you're capable of walking all day without break?
It's a funnily relevant parallel you're making, because designing everything around the car has absolutely been one of the biggest catastrophes of the second half of the 20th century. Much like "AI" in the past couple of years, the personal automobile is a useful tool, but making anything and everything subservient to its use has had catastrophic consequences.
It is political. Designing everything around cars benefits the class of people called "Car Owners". Not so much people who don't have the money or desire to buy a car.
Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But turns out it also benefits car-owners, because it reduces traffic jams and lets you get to your destination with your own car faster.
>Designing everything around cars benefits the class of people called "Car Owners".
Designing everything around cars hurts everyone including car owners. Having no option but to drive everywhere just sucks.
No, it benefits car manufacturers and sellers, and mechanics and gas stations.
Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.
I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.
Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.
I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.
A certain group of people have something wrong with their brains that means they can't be "educated" this way and are forced to learn by studying. The protagonist of the story is one of these people; he feels ashamed of his disability and of how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.
Do you remember the title of that story, by chance?
Profession (1957)
https://en.wikipedia.org/wiki/Profession_(novella)
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking, and even to pay to do so. We just used to call those arrangements religions.
Religions may outsource opinions on morality, but no one went to their spiritual leader to ask about the Pythagorean theorem or the population of Zimbabwe.
Well, now, that's not actually true:
[1] https://plato.stanford.edu/entries/pythagoreanism/ [2] https://en.wikipedia.org/wiki/Pythia
That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older and survived based on productive and coercive supremacy.
If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.
On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.
I see your point, but I'd say religion's main technological purpose is as a storage system for the encoding of other technologies (and social patterns) into rituals, the reasons for which don't need to be understood; to the point that it actively discourages examination of their reasons, as what we could call an error-checking protocol. So a religion tends to freeze those technologies in the time at the point of inception, and to treat any reexamining of them as heresy. Calendars are useful for iron age farming, but you can't get past a certain point as a civilization if you're unwilling to reconsider your position that the sun and stars revolve around the earth, for example.
That would have devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or a disadvantage going forward. It is like observing that cars will make people fat and lazy and have devastating consequences for health outcomes - that is exactly what happened, but the net impact was probably still positive because cars boost wealth, lifestyles, and access to healthcare so much, even if people get less exercise.
It is unclear that a human thinking about things is going to be an advantage in 10 or 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scalable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome, based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
If you told me this was a verbatim cautionary sci-fi short story from 1953 I'd believe it.
Perhaps Asimov in 1958?
https://en.wikipedia.org/wiki/The_Feeling_of_Power
That said, I maintain there are huge qualitative differences between using a calculator versus "hey computer guess-solve this mess of inputs for me."
At long last, we have created the Torment Nexus from classic sci-fi novel "Don't Create The Torment Nexus"!
Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.
Or how about that vast majority gets a decent education and higher standard of living so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.
How is that relevant? You can give whatever support you like to humans, but machine learning is doing the same thing in general cognition that it has done in every competitive game. It doesn't matter how much education the humans get - if they try to make complex decisions using their brains, then silicon will outperform them at planning to achieve desirable outcomes. Material prosperity is a desirable outcome, and machines will be able to plot a better path to it than some trained monkey. The only question is how long it'll take to resolve the engineering challenges.
That is absurd and is not supported by any facts
You'd make a great dictator.
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
> The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me.
If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
We are using AI for a lot of small tasks inside big systems, or even for designing entire architectures, and we still need to validate the answers ourselves, at least for the foreseeable future.
But outsourcing thinking erodes the very brainpower needed to do that validation, because it often requires understanding a problem's detailed structure and the reasoning path behind it.
In the current situation, by vibing and YOLOing our way through most problems, we are losing the very ability we still need and can't replace with AI or other tools.
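To put rough numbers on the two thought experiments above (a calculator that is wrong 20% of the time, and one that is right 99.9% of the time), here is a minimal sketch; it assumes errors are independent and that a design is only sound if every step comes out right:

```python
# Chance that an n-step calculation comes out entirely correct when each step
# has an independent probability p_error of being wrong.
def chance_all_correct(p_error: float, steps: int) -> float:
    return (1 - p_error) ** steps

for p_error, steps in [(0.20, 10), (0.20, 50), (0.001, 100), (0.001, 1000)]:
    print(p_error, steps, chance_all_correct(p_error, steps))
# Roughly: a 20% error rate survives 10 steps ~10.7% of the time and 50 steps ~0.0014% of the time;
# even a 0.1% error rate still ruins a 1000-step design about 63% of the time.
```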
> If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
I think past successes have led to a category error in the thinking of a lot of people.
For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.
But mitigated hardware errors, whether equipment failures, alpha particles, or other, are uncorrelated.
If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.
But three seemingly uncorrelated LLMs? No fucking way.
There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.
The LLMs are not uncorrelated, though; they're all trained on the same dataset (the internet) and subject to most of the same biases.
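A minimal sketch of the arithmetic behind the "three uncorrelated 99.99% calculators" point, with the correlation caveat from the reply above noted in the comments; the numbers hold only if failures are genuinely independent:

```python
from itertools import product

def majority_wrong(p_each: float, n_units: int = 3) -> float:
    """Probability that a majority vote of n independent units is wrong,
    when each unit is independently wrong with probability p_each."""
    total = 0.0
    for outcome in product([True, False], repeat=n_units):  # True = this unit is wrong
        prob = 1.0
        for unit_is_wrong in outcome:
            prob *= p_each if unit_is_wrong else (1.0 - p_each)
        if sum(outcome) > n_units // 2:
            total += prob
    return total

p = 1e-4                   # a "99.99% correct" calculator
print(majority_wrong(p))   # ~3e-8: far safer than any single unit
# Caveat from the reply above: if the voters share training data and biases, their
# errors are correlated, majority voting stops helping, and this math no longer holds.
```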
It's funny, I'm working on trying to get LLMs to place electrical devices, and they silently developed the opinion that my switches above countertops should be at 4 feet and not the 3'10" I'm asking for (the top cannot be above 4').
That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.
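Purely as an illustration of the "validate, don't trust" approach mentioned upthread (all names and numbers here are hypothetical, not from any real tool): a deterministic check of LLM-proposed switch placements against the requested 3'10" height and the 4-foot limit might look something like this:

```python
REQUESTED_TOP_IN = 46   # 3'10": the height actually asked for (hypothetical spec value)
MAX_TOP_IN = 48         # the "top cannot be above 4 feet" rule

def audit_placements(placements):
    """Flag proposed switches whose top height breaks the limit or drifts from the spec."""
    problems = []
    for p in placements:  # each placement is e.g. {"id": "SW-1", "top_in": 48}
        if p["top_in"] > MAX_TOP_IN:
            problems.append(f'{p["id"]}: {p["top_in"]}" is above the {MAX_TOP_IN}" limit')
        elif p["top_in"] != REQUESTED_TOP_IN:
            problems.append(f'{p["id"]}: {p["top_in"]}" differs from the requested {REQUESTED_TOP_IN}"')
    return problems

# Hypothetical model output: one switch silently bumped up to 4 feet.
print(audit_placements([{"id": "SW-1", "top_in": 48}, {"id": "SW-2", "top_in": 46}]))
# ['SW-1: 48" differs from the requested 46"']
```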
On the other hand the incorrect values may drive architects to think more critically about what their tools are producing.
On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified by how bad a lot of people are with simple math, interest rates, and other things. This has definitely opened up a lot of opportunities for companies to exploit that ignorance.
The difference is that a calculator always returns 2+2=4. And even then, if you ended up with 6 instead of 4, the fact that you know how to do addition leads you to believe you fat-fingered the last entry, not that 2+2 equals 6.
Can't say the same for an LLM. Our teachers were right about the internet as well, of course. If you remember those early wild-west internet school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say "cite from these works or references we discussed in class" or they'd get junk back.
To some extent, the argument against calculators is perfectly valid.
The cash register says you owe $16.23, you give the cashier $21.28, and all hell breaks loose.
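For anyone puzzled by the odd tender amount, the arithmetic the cashier is expected to do in their head is just this (a trivial sketch):

```python
from decimal import Decimal

bill = Decimal("16.23")
tendered = Decimal("21.28")
print(tendered - bill)  # 5.05 -> one five-dollar bill and a nickel back, instead of a pocketful of coins
```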
Too late. Outsourcing has already accomplished this.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social network topology of small businesses, with no single business being a linchpin whose failure would implode everything.
Now the economy is a handful of too-big-to-fail firms eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas, so that the next generation would learn real engineering skills. But twenty-something me had no idea how far up the political tree that decision had already been made. I helped train a bunch of people's replacements before the telecom-focused network hardware manufacturer I worked for shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades-long play on the part of aging leadership to ensure Americans feel their only choice is to capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
And some pretty well-connected people are hinting at a similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...
The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
> it’s hard to get back once the habit forms.
Humans are highly adaptable. It's hard to go back while the thing we're used to still exists, but if it vanished from the world we'd adapt within a few weeks.
Yet it does feel different with LLMs compared to your examples. Yes, people can’t navigate without Apple/Google maps, but that’s still very different from losing critical thinking skills.
That said, LLMs are perhaps accelerating that but aren’t the only cause (lack of reading, more short form content, etc)
How is navigation not critical thinking? Anyone should be able to use a map to plan a route. Navigation is critical to survival, imo.
The “lump of cognition” framing misses something important: it’s not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn’t free. Lowering the cost of producing words clearly isn’t the same as increasing the amount of actual thought.
Looking at the words that get produced at this lowered cost, and observing how satisfactory they apparently are to most people (and observing the simplicity of the heuristics people use to try to root out "cheap" words), has been quite instructive (and depressing).
I'm grateful that I spent a significant part of my life forced to solve problems and forced to struggle to produce the right words. In hindsight I know that that's where all the learning was. If I'd had a shortcut machine when I was young I'd have used it all the time, learned much less, and grown up dependent on it.
I'd argue that choosing words is a key skill because language is one of our tools for examining ideas and linking together parts of our brains in new ways.
Even just writing notes you'll never refer to again, you're making yourself codify vaguer ideas or impressions, test assumptions, and then compress the concept for later. It's a new external information channel between different regions of your head, which seems to provide value.
Outsourcing thinking is exactly what I tell our developers they're there for. They are hired to do the kind of thinking I’d rather not do.
Some of humanity’s most significant inventions are language (symbolic communication), writing, the scientific method, electricity, the computer.
Notice something subtle.
Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.
This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.
The main difference is that the computer you use for writing doesn't require you to pay for every word. And that's the difference in the business models being pushed right now all around the world.
I like this imaginary world you propose that gives free computers, free electricity, a free place to store it, and is free from danger from other tribes.
Sign me up for this utopia.
If an AI thinks for you, you're no longer "outsourcing" parts of your mind. What we call "AI" now is technically impressive but is not the end point for where AI is likely to end up. For example, imagine an AI that is smart enough to emotionally manipulate you, at what point in this interaction do you lose your agency to "outsource" yourself instead of acting as a conduit to "outsource" the thoughts of an artificial entity? It speaks to our collective hubris that we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
> we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
Intellect is not the same thing as volition.
There's a parallel there to drugs. They are most definitely not "intelligent", yet they can still destroy our agency or free-will.
How many of you know how to do home improvement? Fix your own clothes? Grow your own food? Cook your own food? How about making a fire or shelter? People used to know all of those things. Now they don't, but we seem to be getting along in life fine anyway. Sure, we're all frightened by the media about the dangers lurking from not knowing more, but actually our lives are fine.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
Executive function is not the same as weaving or carpentry. The scary problem comes from people who are trying to abdicate their entire understand-and-decide phase to an outside entity.
What's more, that's not fundamentally a new thing, it's always been possible for someone to helplessly cling to another human as their brain... but we've typically considered that to be a mental-disorder and/or abuse.
> How many of you know how to [...] cook your own food?
That's a very low bar. I expect most people know how to cook, at least simple dishes.
Systems used to be robust, now they’re fragile due to extreme outsourcing and specialization. I challenge the belief that we’re getting along fine. I argue systems are headed to failure, because of over optimization that prioritized output over resilience.
I still read the LLMs output quite critically and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
I usually feed my articles to it and ask for insight into what's working. I usually wait to initiate any sort of AI insight until my rough draft is totally done...
Working in this manner, it is so painfully clear it doesn't even really follow the flow of the article. It misses so many critical details and just sort of fills in its own blanks, wrongly... When you tell it that it's missing a critical detail, it treats you like some genius, every single time.
It is hard for me to imagine growing up with it and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, for questions I don't consider worthy of any real time.
AI has kinda taken away my flow state for coding, rare as it was... I still get it when writing stuff I am passionate about, and I can't imagine I'll ever want to outsource that.
> When you tell it that it's missing a critical detail, it treats you like some genius, every single time.
Yeah, or as I say, Uriah Heep.
To be fair, telling everybody they are geniuses is the obvious next step after participation awards.
Because people have figured out that participation awards are worthless, so let's give them all first place.
> And it terrifies me that others aren’t quite as objective.
I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at valuing the same kind of work generally.
Put more bluntly: slop is slop, and it has been with us for far longer than AI.
Interesting read.
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention.
Distributed verification. Eight billion of us can divide up the topics and subjects and pool together our opinions and best conclusions.
What is that saying again, a person is smart, a group is dumb?
That's the risk involved with opinions and conclusions.
"A person is smart, people are dumb." I heard this for the first time from Men in Black, lol.
A lot of this stuff depends on how a person chooses to engage, but my contrarian take is that throughout history, whenever anyone said technology X would lead to the downfall of humanity for reasons Y, that take was usually correct.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is, this was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many if not most people, cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person no longer had to learn to read a map.
When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.
See Scott Alexander’s The Whispering Earring (2012):
https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
Wasn't there a follow-up to this where Scott denied that the story was "about" the obvious thing for it to be about?
Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
> Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
Cogito, ergo sum
The corollary is: absence of thinking equals non-existence. I don't see how that can be an improvement. Improvement can happen only when it's applied to the quality of people's thinking.
The converse need not hold. Cognition implies existence; it is sufficient but not necessary. Plenty of things exist without thinking.
(And that's not what the Cogito means in the first place. It's a statement about knowledge: I think therefore it is a fact that I am. Descartes is using it as the basis of epistemology; he has demonstrated from first principles that at least one thing exists.)
I know the trivialities. I didn't intend to make a general or formal statement; we're talking about people. In a competitive world, those who've been reduced to idiocracy won't survive. AI not only isn't going to help them, it will be used against them.
> Plenty of things exist without thinking.
Existence in an animal farm isn't human existence.
Thinking developed naturally as a tool that helps our species to stay dominant on the planet, at least on land. (Not by biomass but by the ability to control.)
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, like any other tool, is useful when it solves more problems than it creates. For instance, the ability to move very fast may be beneficial if it gets you where you want to be, and detrimental if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
> If outsourcing thought is beneficial, those who practice it will thrive
It makes them prey to and dependent on those who are building and selling them the thinking.
> I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
That's like saying ultra processed foods provide the best results when eaten sparingly, so it will become useful when people adopt overall responsible diets. Okay, sure, but what does that matter in practice since it isn't happening?
Outsourcing thinking is not a skill. It is the same as skipping the gym. There's nothing to practice here.
A lot of people practice not going to the gym! I bet it shows, e.g., in their dating outcomes, at least statistically.
I suspect that outsourcing thinking may show up in quite a few outcomes, too. We just need time to gather the statistics.