I very much disagree. To attempt a proof by contradiction:
Let us assume that the author's premise is correct, and LLMs are plenty powerful given the right context. Can an LLM recognize the context deficit and frame the right questions to ask?
They cannot: LLMs have no ability to understand when to stop and ask for directions. They routinely produce contradictions, fail simple tasks like counting the letters in a word, and so on. They cannot even reliably execute my "ok modify this text in canvas" vs "leave canvas alone, provide suggestions in chat, apply an edit once approved" instructions.
This is not a proof by contradiction - you have stated an assumption followed by a bunch of non sequiturs about what LLMs can and can't do, also known as begging the question. Under the conditions of your assumption (namely that LLMs are plenty powerful with the right context), why would you believe anything in your last paragraph? That's how a proof by contradiction works.
(not saying you are wrong, necessarily, but I don't think this argument holds water)
I agree it isn't really proof by contradiction. It is more like proof by demonstration of concrete failures in real life, which is stronger.
It is like the author is saying 12 is a prime number and I am like but I divided it by 2 just the other day.
Nit pick, but proof by contradiction is necessarily stronger as it is deductive reasoning, and this kind of "proof" by anecdotal evidence doesn't rise above abductive reasoning. Still useful, very much not a proof.
True, but in this case these are hardly globally applicable facts about LLM-based systems (not nearly to the same degree as "2 divides 12" anyway). Different systems have different properties on all those fronts.
I don't think no argument is the right substitute for a bad one!
Claude routinely stops and asks me clarifying questions before continuing, especially when given extended thinking or doing research.
Indeed, the ability to do so seems to depend more on how well your system prompt is laying out that workflow, than how "intelligent" the model is.
It feels crazy to keep arguing about LLMs being able to do this or that, but not mention the specific model? The post author only mentions the IMO gold-medal model. And your post could be about anything. Am I to believe that the two of you are talking about the same thing? This discussion is not useful if that’s not the case.
This depends on whether you mean LLMs in the sense of single shot, or LLMs + software built around it. I think a lot of people conflate the two.
In our application we use a multi-step check_knowledge_base workflow before and after each LLM request. Pretty much, make a separate LLM request to check the query against the existing context to see if more info is needed, and a second check after generation to see if the output text exceeded its knowledge base.
And the results are really good. Now coding agents in your example are definitely stepwise more complex, but the same guardrails can apply.
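Roughly, the shape of it as a minimal sketch (the prompts and function names here are illustrative placeholders rather than our actual implementation, and call_llm stands in for whatever chat-completion client you use):

    def check_knowledge_base(query: str, context: str, call_llm) -> bool:
        """Pre-check: does the retrieved context contain enough to answer?"""
        verdict = call_llm(
            "Answer strictly YES or NO. Does the CONTEXT contain enough "
            f"information to answer the QUERY?\nQUERY: {query}\nCONTEXT: {context}")
        return verdict.strip().upper().startswith("YES")

    def grounded_in_context(answer: str, context: str, call_llm) -> bool:
        """Post-check: does the generated answer stay inside the knowledge base?"""
        verdict = call_llm(
            "Answer strictly YES or NO. Is every claim in the ANSWER supported "
            f"by the CONTEXT?\nANSWER: {answer}\nCONTEXT: {context}")
        return verdict.strip().upper().startswith("YES")

    def answer_with_guardrails(query: str, context: str, call_llm) -> str:
        if not check_knowledge_base(query, context, call_llm):
            return "NEED_MORE_CONTEXT"    # route back to retrieval or ask the user
        answer = call_llm(f"Using only this CONTEXT:\n{context}\n\nAnswer: {query}")
        if not grounded_in_context(answer, context, call_llm):
            return "ANSWER_NOT_GROUNDED"  # flag for retry or human review
        return answer

The pre-check routes back to retrieval or the user; the post-check catches answers that wandered outside the provided context.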
> Pretty much, make a separate LLM request to check the query against the existing context to see if more info is needed, and a second check after generation to see if the output text exceeded its knowledge base.
They are unreliable at that. They can't reliably judge LLM outputs without access to the environment where those actions are executed and sufficient time to actually get to the outcomes that provide feedback signal.
For example, I was working on evaluation for an AI agent. The agent was about 80% correct, and the LLM judge about 80% accurate in assessing the agent. How can we have self-correcting AI when it can't reliably self-correct? Hence my idea - only the environment outcomes over a sufficient time span can validate work. But that is also expensive and risky.
if an llm is unreliable, then why would another just-as-unreliable llm make it any better?
Do you have a concrete example of what you mean?
For example, the article above was insightful. But the author's pointing to thousands of disparate workflows that could be solved with the right context, without actually providing one concrete example of how he accomplishes this, makes the post weaker.
> It’s because the bottleneck isn’t in intelligence, but in human tasks: specifying intent and context engineering.
So the bottleneck is intelligence.
Junior engineers are intelligent enough to understand when they don't understand. They interrogate the intent and context of the tasks they are given. This is intelligence.
Solving math questions is not intelligence, computers have been better than humans at that for like 100 years, as long as you first do the intelligent part as a human: specifying the task formally.
Now we just have computer programs with another kind of input in natural language, and which require dozens of gigabytes of video RAM and millions of cores to execute. And we still have to have humans do the intelligent part: figure out how to describe the problem so the dumb but very very fast machine can answer the question.
I truly love this comment, which essentially says: LLMs are glorified calculators, with ambiguous grammar. :)
This article is insightful, but I blinked when I saw the headline “Reducing the human bottleneck” used without any apparent irony.
At some point we should probably take a step back and ask “Why do we want to solve this problem?” Is a world where AI systems are highly intelligent tools, but humans are needed to manage the high level complexity of the real world… supposed to be a disappointing outcome?
Assuming you buy the idea of a post-scarcity society, and assuming we can let go of our long-ingrained notion that spending your existence in toil to survive is a moral imperative and that not working deserves punishment if not death, I personally look forward to a time when we can get off the hamster wheel. Most buttons that get pushed by people are buttons not worth spending your existence pushing. This includes an awful lot of “knowledge work,” which is often better paid but more insidious in that it requires not just your presence but capturing your entire attention and mind inside and outside work. I would also be hopeful that fertility rates would decline and there would simply be far fewer humans.
In Asimov’s robot stories the Spacers are long-lived and low-population because robots do most everything. He presents this as a dead end that stops us from conquering the galaxy. This to me sounds like a feature, not a bug. I think human existence could be quite good with large-scale automation, fewer people, and less suffering due to the necessity for everyone to be employed.
Note I recognize you’re not saying exactly the same thing as I’m saying. I think humans will never cede full executive control by choice at some level. But I suspect, sadly, power will be confined to those few who do get to manage the high level complexity of the real world.
We will never have a post scarcity society. Automation can make certain foodstuffs and manufactured goods somewhat cheaper but the things that people really want will always be in short supply, for example real estate in geographically favorable areas.
With a stable population, post-scarcity is surely possible technically. Just invest resources into improving everything that already exists.
I also agree that we will never have a post scarcity society; but this is more about humanity than technology.
There will always be scarcity for goods whose value is derived from their scarcity.
Maybe food won't be scarce (we are actually very close to that) and shelter may not be scarce but, even if you invent the replicator, there will still be things that are bespoke.
When the celibate classes have been able to sublimate what is arguably the strongest of all wants for as long as they have, I doubt there is any desire that could not be redirected with similar techniques.
This assumes that the celibacy was actually maintained, not pretended and secretly violated. There is plenty of evidence that those who were supposed to preserve celibacy in medieval times actually did not.
Indeed, scarcity will be artificially created if it's not naturally present. The human need to have something that others do not is strong.
I have never understood "post scarcity" to mean the end of ALL scarcity, which is essentially impossible by definition.
Relative to 500 years ago, we have already nearly achieved post-scarcity for a few types of items, like basic clothing.
It seems this is yet another concept for which we need to adjust our understanding from binary to a spectrum, as we find our society advancing along the spectrum, in at least some aspects.
Also for basic food. You can get all the rice and beans you really need for basically no money. That means actual starvation is nowadays a political issue, not a resource issue.
We can automate plenty in physiological needs, and in fact have already. There's plenty of food and housing for everyone to have them, but a bunch of people will immediately destroy them if provided with such. I don't think "Dispose of a full house every 3 months" will ever be practical, but we might be able to "solve" physiological needs.
Safety needs might be possible to solve. Totalitarian states with ubiquitous panopticons can leave you "safe" in a crime sense, and AI gaslighting and happy pills will make you "feel" safe.
Love and belonging we have "Plenty" of already - If you're looking for your people, you can find them. Plenty aren't willing to look.
But once you get up to Esteem, it all falls apart. Reputation and Respect are not scalable. There will always be a limited quantity of being "The Best" at anything, and many are not willing to be "The Best" within tight constraints; There's always competition. You can plausibly say that this category is inherently competitive. There's no respect without disrespect. There's no best if there's no second best, and second best is first loser. So long as humans interact with each other - So long as we're not each locked in our own private shards of reality - There will be competition, and there will be those that fall short.
Self Actualization is almost irrelevant at this point. It falls into exactly the same trap as the above. You can simulate a reality where someone is always the best at whatever they decide to do, but I think it will inherently feel hollow. Agent Smith said it best: https://youtu.be/9Qs3GlNZMhY?t=23
> There will always be a limited quantity of being "The Best" at anything
Still, to pick a simple example, we do have different sports at which different people are "The Best". One solution would be to multiply the categories, which I feel is already happening to some extent with all the computer games or niche artistic trends.
And I would claim that very few people are "The Best", it's mostly about not being "the worst" at everything you are involved in.
Do you really want to live in this "post scarcity" world? With no effort required to meet your needs and desires, what motivation will you have to do anything?
Kaczynski's warnings seem more apt with every year that passes.
I want to live in the post scarcity world. Given that we are headed into an ultra-productive world, I prefer by miles a world without scarcity over a world full of scarcity because the elites are hoarding the resources, and the only way to provide for oneself is by outcompeting the machines that already produce at zero marginal price, but only for the elites.
Look, even a stopped clock is right twice a day.
Kaczynski didn't invent any of these ideas, or even develop them. Instead of citing him, why not cite... literally any other person who held them and whose mind wasn't blown out by LSD and a desire to commit random political murder?
You're doing your point a disservice by bringing in all of that baggage.
Perhaps there are more original or precise sources for the ideas. I've read Jacques Ellul, for example, but for someone not well versed in philosophy like myself, Kaczynski is more accessible and well known.
I don't agree with many of his conclusions or actions, but I have no problem judging the good ideas he advocated on their own merit.
>Do you really want to live in this "post scarcity" world?
Yes.
>Kaczynski
You're citing a psychopathic terrorist who murdered 3 people and injured a further 23.
>what motivation will you have to do anything?
For one thing, freedom from self-appointed taskmasters who view Kaczynski as a source of inspiration.
It actually doesn't matter what we want. Because eliminating humans from the loop will, in the long run, increase yield, capitalistic forces will automate humans away.
We should stop considering it a given that capitalistic forces will do this and start considering how we build systems that optimize for the maximum amount of human good rather than the maximum amount of abstract economic good (which nowadays usually means an increase in wealth disparity).
This is correct. It will require non-market forces to regulate soft-landings for humans. We may see a wave of "job-preserving" legislation in the coming years but these will eventually be washed away in favor of taxing the AI economy.
If you don't have customers anymore, who are you selling your products to?
Same same human problems. Regardless of their inherent intelligence, humans perform well only when given decent context and clear specifications/data. If you place a brilliant executive into a scenario without meaningful context... an unfamiliar board meeting where they have no idea of the company’s history, prior strategic discussions, current issues, personnel dynamics, expectations, etc., they will struggle just as a model does, surely. They may still manage something reasonably insightful, leveraging general priors, common sense, and inferential reasoning, but their performance will never match their potential had they been fully informed of all context and given clear data/objectives. I think context is the primary primitive property of intelligent systems in general?
> they will struggle just as a model does, surely
A human will struggle, but they will recognize the things they need to know, and seek out people who may have the relevant information. If asked "how are things going" they will reliably be able to say "badly, I don't have anything I need".
That's just additional context.
I really like this analogy! Many real-world tasks that we'd like to use AI for seem infinitely more complex than can be captured in a simple question/prompt. The main challenge going forward, in my opinion, is how to let LLMs ask the right questions – query for the right information – given a task to perform. Tool use with MCPs might be a good start, though it still feels hacky to have to define custom tools for LLMs first, as opposed to how humans effectively browse and skim lots of documentation to find actually relevant bits.
An intelligent system would know how to get that information without getting spoon fed it
This comparison may make sense on short-horizon tasks for which there is no possibility of preparation. Given some weeks to prepare, a good human executive will get the context, while today's best AI systems will completely fail to do so.
Today’s AI systems probably won’t excel, but they won’t completely fail either.
Basically give the LLM a computer to do all kinds of stuff against the real world, kick it off with a high level goal like “build a startup”.
The key is to instruct it to manage its own memory in its computer, and when context limit inevitably approaches, programmatically interrupt the LLM loop and instruct it to jot down everything it has for its future self.
It already kinda works today, and I believe AI systems a year from now will excel at this:
https://dwyer.co.za/static/claude-code-is-all-you-need.html
https://www.anthropic.com/research/project-vend-1
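A toy sketch of that loop, with everything named for illustration only (llm stands in for whatever agent/chat API you use, and the token estimate is a crude heuristic):

    NOTES_FILE = "memory.md"
    CONTEXT_BUDGET = 150_000  # assumed token budget: model limit minus a safety margin

    def approx_tokens(messages) -> int:
        # crude heuristic: roughly 4 characters per token
        return sum(len(m["content"]) for m in messages) // 4

    def run_agent(llm, goal: str):
        messages = [{"role": "user", "content":
                     f"Goal: {goal}. You have a computer; keep durable notes in {NOTES_FILE}."}]
        while True:
            if approx_tokens(messages) > CONTEXT_BUDGET:
                # Interrupt the loop: have the model write down everything its
                # future self needs, then restart the context from those notes.
                handoff = llm(messages + [{"role": "user", "content":
                    "Context limit approaching. Write complete notes for your "
                    "future self: current state, open tasks, lessons learned."}])
                with open(NOTES_FILE, "w") as f:
                    f.write(handoff)
                messages = [{"role": "user", "content":
                             f"Goal: {goal}. Notes from your previous session:\n{handoff}"}]
            reply = llm(messages)
            messages.append({"role": "assistant", "content": reply})
            # ...execute any tool calls in `reply` against the real environment,
            # append the results as new messages, and loop.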
> I think context is the primary primitive property of intelligent systems in general?
What do you mean by 'context' in this context? As written, I believe that I could knock down your claim by pointing out that there exist humans who would do catastrophically poorly at a task that other humans would excel at, even if both humans have been fully informed of all of the same context.
To clarify what I'm thinking here by analogy...
Imagine that someone said:
> I think wood is the primary primitive property of sawmills in general.
An obvious observation would be that it is dreadfully difficult to produce the expected product of a sawmill without tools to cut or sand or otherwise shape the wood into the desired shapes.
One might also notice that while a sawmill with no wood to work on will not produce any output, a sawmill with wood but without woodworking tools is vanishingly unlikely to produce any output... and any it does manage to produce is not going to be good enough for any real industrial purpose.
Author IMO correctly recognizes that access to context needs to scale (“latent intent”, which I love), but I’m not sure I’m convinced that current models will be effective even if given access to all the priors needed for a complex task. The ability to discriminate valuable from extraneous context will need to scale with the size of available context; it will be pulling needles from haystacks where relevance isn’t straightforward similarity. I think we will need to steer these things.
We’re already steering, during pre-training (e.g. reasoning RLHF), as well as test-time (structured outputs, tool calls, agents…)
It's a specific model that was run for the maths. GPT-5 and Gemini 2.5 still cannot compute an arbitrary-length sum of whole numbers without a calculator. I have a procedurally generated benchmark of basic operations; LLMs get better at it over time, but they still can't solve basic maths or logic problems.
BTW I'm open to selling it, my email is on my hn profile.
I can't see why that's necessary, when it can call a tool. Everyone uses a calculator. As for logic problems, it can solve them with reasoning; perhaps it's not the smartest, but it can solve logic problems. All indications are that it will continue to become smarter.
Have you ever seen what these arbitrary length whole numbers look like once they are tokenized? They don't break down to one-digit-per-token, and the same long number has no guarantee of breaking down into tokens the same way every time it is encountered.
But the algorithms they teach humans in school to do long-hand arithmetic (which are liable to be the only algorithms demonstrated in the training data) require a single unique numeral for every digit.
This is the same source as the problem of counting "R"'s in "Strawberry".
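You can see the chunking directly with a BPE tokenizer, for example OpenAI's tiktoken (the exact split depends on the tokenizer and the digits involved):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for n in ["987654321987654321", "31415926535897932384626433"]:
        ids = enc.encode(n)
        # decode each token id separately to see how the digits were grouped
        print(n, "->", [enc.decode([t]) for t in ids])
    # The digits come out in multi-digit chunks (e.g. three at a time),
    # not one numeral per token as the schoolbook algorithm assumes.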
That was the initial thinking of everyone I explained this to, and it was also my speculation, but when you look at the reasoning where it makes the mistake, it correctly extracts the digits out of the input tokens. As I say in another comment, most of the mistakes happen when it recopies the answer it calculated from the summation table. You can rule out tokenization issues at answer-extraction time by making it output an array of digits of the answer; it will still fail at simply recopying the correct digit.
I recently saw someone who posted a leaked system prompt for GPT-5 (and regardless of the truth of the matter, since I can't confirm the authenticity of the claim, the point I'm making stands alone to some degree).
A portion of the system prompt was specifically instructing the LLM that math problems are, essentially, "special", and that there is zero tolerance for approximation or imprecision with these queries.
To some degree I get the issue here. Most queries are full of imprecision and generalization, and the same type of question may even get a different output if asked in a different context, but when it comes to math problems, we have absolutely zero tolerance for that. To us this is obvious, but when looking from the outside, it is a bit odd that we are so loose and sloppy with, well basically everything we do, but then we put certain characters in a math format, and we are hyper obsessed with ultra precision.
The actual system prompt section for this was funny though. It essentially said "you suck at math, you have a long history of sucking at math in all contexts, never attempt to do it yourself, always use the calculation tools you are provided."
i'd wager your benchmark problems require cumbersome arithmetic or are poorly worded / inadequately described. or, you're mislabeling them as basic math and logic (a domain within which LLMs have proven their strengths!)
i only call this out because you're selling it and don't hypothesize on why they fail your simple problems. i suppose an easily aced bench wouldn't be very marketable
This is a simple sum of two whole numbers; the numbers are simply big.
Most of the time they build a correct summation table but fail to correctly copy the sum result into the final answer. That is not a tokenisation problem (you can change the output format to make sure of it). I have a separate benchmark that tests specifically this: when the input is too large, the LLM fails to accurately copy the correct token. I suppose the positional embeddings are not perfectly learned and that sometimes causes a mistake.
The prompt is quite short, it uses structured output, and I can generate a nice graph of % of correct responses across question difficulty (which is just the total digit count of the input numbers).
LLMs have a 100% success rate on these sums until they reach a frontier; past that, their accuracy collapses at varying speeds depending on the model.
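For a sense of the setup, here is a minimal sketch of what a procedurally generated version of such a benchmark can look like; the prompt wording and the ask_model callable are placeholders, not the actual benchmark:

    import random

    def make_case(total_digits: int):
        # difficulty knob = total digit count across both operands
        d1 = total_digits // 2
        d2 = total_digits - d1
        a = random.randint(10 ** (d1 - 1), 10 ** d1 - 1)
        b = random.randint(10 ** (d2 - 1), 10 ** d2 - 1)
        return f"Compute {a} + {b}. Reply with the digits of the result only.", str(a + b)

    def accuracy_curve(ask_model, difficulties=range(4, 61, 4), trials=20):
        # ask_model: callable prompt -> str (your structured-output LLM call goes here)
        curve = {}
        for d in difficulties:
            cases = [make_case(d) for _ in range(trials)]
            hits = sum(ask_model(prompt).strip() == expected for prompt, expected in cases)
            curve[d] = hits / trials
        return curve  # fraction correct per total digit count, ready to plot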
Have you tried greedy decoding (temp 0) in aistudio?
The temp 0.7-1.0 defaults are not designed for reconstructing context with perfect accuracy.
> GPT-5 and Gemini 2.5 still cannot compute an arbitrary-length sum of whole numbers without a calculator.
Neither can many humans, including some very smart ones. Even those who can will usually choose to use a calculator (or spreadsheet or whatever) rather than doing the arithmetic themselves.
Right but most (competent) humans will reliably use a calculator. It's difficult to get these to reliably make lots of tool calls like that.
I do think that competent humans can solve any arbitrary sum of two whole numbers given pen, paper, and time. LLMs can't do that.
That’s interesting, you added a tool. You did not just leave it to the human alone.
I'm not the fellow you replied to, but I felt like stepping in.
> That’s interesting, you added a tool.
The "tool" in this case, is a memory aid. Because they are computer programs running inside a fairly-ordinary computer, the LLMs have exactly the same sort of tool available to them. I would find a claim that LLMs don't have a free MB or so of RAM to use as scratch space for long addition to be unbelievable.
> Neither can many humans...
1) GPT-5 is advertised as "PhD-level intelligence". So, I take OpenAI (and anyone else who advertises their bots with language like this) at their word about the bot's capabilities and constrain the set of humans I use for comparison to those who also have PhD-level intelligence.
2) Any human who has been introduced to long addition will absolutely be able to compute the sum of two whole numbers of arbitrary length. You may have to provide them a sufficiently strong incentive to actually do it long-hand, but they absolutely are capable because the method is not difficult. I'm fairly certain that most adult humans [0] (regardless of whether or not they have PhD-level intelligence) find the method to be trivial, if tedious.
[0] And many human children!
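The schoolbook algorithm really is trivial; a quick sketch, with the numbers given as decimal strings:

    def long_add(a: str, b: str) -> str:
        # schoolbook long addition: right to left, carrying the overflow
        a, b = a[::-1], b[::-1]
        digits, carry = [], 0
        for i in range(max(len(a), len(b))):
            da = int(a[i]) if i < len(a) else 0
            db = int(b[i]) if i < len(b) else 0
            carry, d = divmod(da + db + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    assert long_add("987654321987654321", "123456789123456789") == \
        str(987654321987654321 + 123456789123456789)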
The bottleneck for automation is verification. With human work, verification was fast(er) because you know where to look, with certain assumptions that your upstream tasker would not have made trivial mistakes. For automation, AI needs to verify its own work, review, and self-correct to be able to automate any given work. Where this works, it will also change the abstraction layer compared to what it is today. The problem is the same as with every automation promise - it needs to work reliably, say 95% or 99% of the time, and when it doesn't, there should be human contingency in terms of what to look for. Considering coding as the first example: it's already underway. AI generates the code, the test cases, and then verifies the code works as intended. Code has a built-in verification layer (both the compiler and unit tests). High probability the other domains move towards something similar too. I would also say the model needs to be intelligent enough to course correct when the output isn't validated[1].
Verification solves the human-in-the-loop dependency both for AI and human tasks. In all the places where we could automate in the past, there were clear quality checks which ensured the machinery was working as expected. The same thing will be replicated with AI too.
Disclaimer: I have been working on building a universal verifier for AI tasks. The way it works is you give it a set of rules (policy) + AI output (could be human output too) and it outputs a scalar score + clause-level citations. So I have been thinking about the problem space and might be overrating this. Would welcome contrarian ideas. (No, it's not LLM-as-a-judge.)
[1]: Some people may call it environment-based learning, but in ML terms I feel it's different. That would be another example of SV startups using technical terms to market themselves when they don't do what they say.
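To make the I/O contract concrete, here is only the shape of the interface described above, with hypothetical names; it says nothing about how the scoring actually works:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Citation:
        clause_id: str    # which clause of the policy was checked
        excerpt: str      # the span of the AI output it applies to
        satisfied: bool

    @dataclass
    class VerificationResult:
        score: float                # scalar score, e.g. in [0, 1]
        citations: List[Citation]   # clause-level evidence behind the score

    def verify(policy: List[str], output: str) -> VerificationResult:
        # the scoring mechanism itself is deliberately left out here
        raise NotImplementedError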
One thing that comes to mind: You still have to verify that the tests are exhaustive, and that the code isn't just gaming specific test scenarios.
I guess fuzzing and property-based testing could mitigate this to some extent.
For the square covering problem, answer, 2.
Verification is the bottleneck, not ideation. LLMs can generate anything on tap, but solving any non-trivial problem requires iteration between thinking, doing, and observing outcomes. The real world is too complex to be simulated by AI or humans. The scientific method works the same way; we are not exempt from having to validate our ideas. But as humans we have better feedback and access to context, and we can assume risks on our own. AI has no skin in the game and bears no responsibility.
So the missing ingredient for AI is access to an environment for feedback learning. It has little to do with AI architecture or datasets. I think a huge source of such data is our human-LLM chat logs. We act as the LLM's eyes, hands, and feet on the ground. We carry the tacit knowledge and social context. OpenAI reports billions of tasks per day, probably trillions of tokens of interactive language combining human, AI, and feedback from the environment. Maybe this is how AI can inch towards learning how to solve real-world problems: it is part of the loop of problem solving, and benefits from having this data for training.
I think the framing of these models as being "intelligent" is not the right way to go. They've gotten better at recall and association.
They can recall prior reasoning from text they are trained on which allows them to handle complex tasks that have been solved before, but when working on complex, novel, or nuanced tasks there is no high quality relevant training data to recall.
Intelligence has always been a fraught word to define and I don't think what LLMs do is the right attribute for defining it.
I agree with a good deal of the article, but because it keeps using loaded words like "intelligent" and "smarter", it has a hard time explaining what's missing.
This is because we tend to use a human-centric reference to evaluate the difficulty of a task: playing chess at grandmaster level seems a lot harder than folding laundry, except that it is the opposite, and this weird bias is well known as Moravec’s Paradox.
Intelligence is the bottleneck, but not the kind of intelligence you need to solve puzzles.
For others who also hadn't heard of that: https://en.wikipedia.org/wiki/Moravec%27s_paradox
Providing more context is difficult for a number of reasons. If you do it RAG style you need to know which context is relevant. LLMs are notorious for knowing that a factor is relevant if directly asked about that factor, but not bringing it up if it's implicit. In business things like people's feelings on things, historical business dealings, relevance to trending news can all be factors. If you fine tune... well... there have been articles recently about fine tuning on specific domains causing overall misalignment. The more you fine tune, the riskier.
As a human, I'd also appreciate it if the specifications, documentation, and meetings were not inaccessible to me.
It 100% is still intelligence. GPT-5 with Thinking still can't win at tic-tac-toe.
What if it's the desired outcome? Become more human-like (i.e. dumb) to make us feel better about ourselves? NI beats AI again!
> What if it's the desired outcome?
To be able to reason about the rules of a game so trivial that it has been solved for ages, so that it can figure out enough strategy to always bring the game to at least a draw (if played against one who is playing not to lose), or a win (if played against someone who is leaving the bot an opening to win), as mentioned in [0] and probably a squillion other places?
Duh?
[0] <https://news.ycombinator.com/item?id=44919138>
Speaking of human-level capabilities, it looks like I totally failed to correctly read the section of your comment that I quoted. Shame on me.
However, I'd expect that "Appearing to fail to reason well enough to know how to always fail to lose, and -if the opportunity presents itself- win at one of the simplest games there is." is absolutely not a desired outcome for OpenAI, or any other company that's burning billions of dollars producing LLMs.
If their robot was currently reliably capable of adequate performance at Tic Tac Toe, it absolutely would be exhibiting that behavior.
Tic-tac-toe is solved and a draw can be forced 100% of the time...
That's exactly why it's so crazy that GPT-5 with Thinking still loses...
Ah, your first comment said "can't win". Which is different than "always loses".
Ah okay, well it will still lose some of the time, which is surprising. And it will lose in surprising ways, e.g., thinking for 14 seconds and then making an extremely basic mistake like not seeing it already has two in a row and could just win.
.. and you can "program" a neural network — so simple it can be implemented by boxes full of marbles and simple rules about how to interact with the boxes — to learn by playing tictactoe until it always plays perfect games. This is frequently chosen as a lesson in how neural network training even works.
But I have a different challenge for you: train a human to play tictactoe, but never allow them to see the game visually, even in examples. You have to train them to play only by spoken words.
Point being that tictactoe is a visual game and when you're only teaching a model to learn from the vast sea of stream-of-tokens (similar to stream-of-phonemes) language, visual games like this aren't going to be well covered in the training set, nor is it going to be easy to generalize to playing them.
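For reference, a toy, MENACE-style version of that marble-box idea, learning as 'X' against a random opponent; it gets noticeably better with training, though this stripped-down sketch makes no claim of perfect play:

    import random

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def result(board):
        # returns 'X', 'O', 'draw', or None while the game is still in progress
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return 'draw' if ' ' not in board else None

    boxes = {}  # one "matchbox" of beads per board state the learner has seen

    def beads(board):
        state = ''.join(board)
        if state not in boxes:
            boxes[state] = {i: 3 for i, c in enumerate(board) if c == ' '}
        return boxes[state]

    def play_one_game():
        board, picks, player = [' '] * 9, [], 'X'  # picks: (state, move) chosen by the learner
        while result(board) is None:
            legal = [i for i, c in enumerate(board) if c == ' ']
            if player == 'X':
                box = beads(board)
                move = random.choices(legal, weights=[box[i] for i in legal])[0]
                picks.append((''.join(board), move))
            else:
                move = random.choice(legal)  # opponent plays uniformly at random
            board[move] = player
            player = 'O' if player == 'X' else 'X'
        return picks, result(board)

    def train(games=50_000):
        for _ in range(games):
            picks, outcome = play_one_game()
            for state, move in picks:
                if outcome == 'X':
                    boxes[state][move] += 3    # add beads for every move in a winning game
                elif outcome == 'draw':
                    boxes[state][move] += 1    # mild reward for a draw
                else:
                    boxes[state][move] = max(1, boxes[state][move] - 1)  # remove a bead on a loss

    train()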
Well whatever your story is, I know with near certainty that no amount of scaffolding is going to get you from an LLM that can't figure out tic-tac-toe (but will confidently make bad moves) to something that can replace a human in an economically important job.