The premise of this post and the one cited near the start (https://www.tobyord.com/writing/inefficiency-of-reinforcemen...) is that RL involves just 1 bit of learning for a rollout, rewarding success/failure.
However, the way I'm seeing this is that an RL rollout may involve, say, 100 small decisions out of a pool of 1,000 possible decisions. Each training step will slightly upregulate/downregulate each of those decisions in its own context. There will be uncertainty about which decision was helpful/harmful -- we only have 1 bit of information after all -- but this setup, where many decisions are slowly learned across many examples, seems like it would lend itself well to generalization (e.g., instead of 1 bit in one context, you get a hundred 0.01-bit insights across 100 contexts). There may be some benefits not captured by comparing the number of bits relative to pretraining.
As the blog says, "Fewer bits, sure, but very valuable bits." This seems like another factor that would also be true: learning these small decisions may be vastly more valuable for producing accurate outputs than learning through pretraining.
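To make the arithmetic in that argument concrete, here is a tiny, purely illustrative sketch. The rollout and decision counts are the hypothetical numbers from the comment above, not measurements from any real training run:

```python
# Hypothetical numbers from the comment above, purely illustrative.
REWARD_BITS = 1.0              # one binary pass/fail signal per rollout
DECISIONS_PER_ROLLOUT = 100    # small decisions exercised in one rollout
DECISION_POOL = 1_000          # total pool of possible decisions

# If the single bit is spread evenly over the decisions it touched,
# each decision receives only a tiny fraction of a bit of credit.
bits_per_decision = REWARD_BITS / DECISIONS_PER_ROLLOUT

# Across many rollouts, each decision is exercised in many different
# contexts, so the per-decision signal slowly accumulates.
rollouts = 10_000
touches_per_decision = rollouts * DECISIONS_PER_ROLLOUT / DECISION_POOL
accumulated_bits = touches_per_decision * bits_per_decision

print(bits_per_decision)   # 0.01
print(accumulated_bits)    # 10.0
```

The point of the toy accounting is only that the same 1 bit per rollout, spread thinly across many contexts, still adds up per decision over enough rollouts.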
RL is very important - because while it's inefficient, and sucks at creating entirely new behaviors or features in LLMs, it excels at bringing existing features together and tuning them to perform well.
It's a bit like LLM glue. The glue isn't the main material - but it's the one that holds it all together.
It is the same type of learning, fundamentally: increasing/decreasing token probabilities based on the left context. RL simply provides more training data from online sampling.
Dwarkesh's blogging confuses me, because I am not sure whether the message is free association or relaying information he has gathered.
ex. how this reads if it is free-associating: "shower thought: RL on LLMs is kinda just 'did it work or not?' and the answer is just 'yes or no', yes or no is a boolean, a boolean is 1 bit, then bring in information theory interpretation of that, therefore RL doesn't give nearly as much info as, like, a bunch of words in pretraining"
or
ex. how this reads if it is relaying information gathered: "A common problem across people at companies who speak honestly with me about the engineering side off the air is figuring out how to get more out of RL. The biggest wall currently is the cross product of RL training being slowww and lack of GPUs. More than one of them has shared with me that if you can crack the part where the model gets very little info out of one run, then the GPU problem goes away. You can't GPU your way out of how little info they get"
I am continuing to assume it is much more A than B, given your thorough-sounding explanation and my prior that he's not shooting the shit about specific technical problems off-air with multiple grunts.
He is essentially expanding upon an idea made by Andrej Karpathy on his podcast about a month prior.
Karpathy says that basically "RL sucks" and that it's like "sucking bits of supervision through a straw".
https://x.com/dwarkesh_sp/status/1979259041013731752/mediaVi...
Dwarkesh has a CS degree, but zero academic training or real world experience in deep learning, so all of his blogging is just secondhand bullshitting to further siphon off a veneer of expertise from his podcast guests.
There are some insights there about the base rate of correct responses and pretraining to boost that. Basically, it's searching a suboptimal versus optimal area of the model space at a suboptimal versus optimal rate.
I think the framing of the discussion in general is kind of misleading though, because it kind of avoids the question of "information inefficient about what?"
In RL, the model is becoming more informative about a stimulus-action-feedback space; in SL the model is becoming more informative about a stimulus-feedback space. RL is effectively "built for" searching a larger space.
In situations like the essay where you are directly comparing SL and RL, you're kind of saying for RL "the action space is restricted to dictionary X and the feedback space is binary yes or no" and for SL "the feedback space is restricted to dictionary X". So in a certain sense you're equating the RL action space to the SL feedback space.
In that case, maybe searching over suboptimal regions of the RL-action-SL-feedback space is inefficient. But the reason RL exists, I think, is that it generalizes to situations where the feedback and action spaces are bigger. Maybe you want to differentially associate different responses with different rewards, or sample a response space so large that you can't define it a priori. Then SL breaks down?
Maybe this is obvious but I guess I get a little uneasy about talking about information efficiency of RL and SL without a broader framework of equivalence and what information is being represented by the model in both cases. It seems to me RL is a kind of superset of SL in terms of what it is capable of representing, which maybe leads to inefficiencies when it's not being used to its fullest.
Recent results like PEFT-Bench (arxiv.org/abs/2511.21285) found that while SFT is efficient for formatting, it actually degraded Llama-3-8B's reasoning on math and code tasks compared to the base model.
So is RL required to preserve those logic circuits?
There seems to be a trade-off between compute efficiency and formatting on one side and intelligence on the other.
I think in order to make this kind of argument you would need to be able to show all of the trajectories that are effectively reachable as a result of pre-training, and then how much effective pruning takes place as a result of the total adjustment of the weights in response to one RL sample.
In the limit, the "happy" case (positive reward), policy gradients boil down to performing more or less the same update as the usual supervised strategy for each generated token (or some subset of those if we use sampling). In the unhappy case, they penalise the model for selecting particular tokens in particular circumstances -- this is not something you can normally do with supervised learning, but it is unclear to what extent this is helpful (if a bad and a good answer share a prefix, it will be upvoted in one case and penalised in another case, not in the same exact way but still). So during on-policy learning we desperately need the model to stumble on correct answers often enough, and this can only happen if the model knows how to solve the problem to begin with, otherwise the search space is too big. In other words, while in supervised learning we moved away from providing models with inductive biases and trusting them to figure out everything by themselves, in RL this does not really seem possible.
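A minimal sketch of that "happy"/"unhappy" case point, in pure Python on a toy 3-token vocabulary (the learning rate and setup are made up for illustration): with reward +1, the REINFORCE update on a single softmax decision is exactly the supervised cross-entropy step toward the sampled token, and with reward -1 it is the same step with the sign flipped, pushing probability away.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, action, reward, lr=0.5):
    """One REINFORCE update on a single softmax 'token' decision.

    The gradient of log p(action) w.r.t. the logits is (onehot(action) - p),
    so this is just the reward-weighted version of the usual supervised
    cross-entropy update.
    """
    p = softmax(logits)
    return [
        l + lr * reward * ((1.0 if i == action else 0.0) - pi)
        for i, (l, pi) in enumerate(zip(logits, p))
    ]

logits = [0.0, 0.0, 0.0]   # uniform toy 3-token "vocabulary"
p_before = softmax(logits)[1]

p_rewarded = softmax(reinforce_step(logits, action=1, reward=+1.0))[1]
p_penalised = softmax(reinforce_step(logits, action=1, reward=-1.0))[1]

# +1 reward behaves like a supervised step toward the token;
# -1 reward pushes probability away from it.
assert p_rewarded > p_before > p_penalised
```

This also shows the shared-prefix issue mechanically: the same token in the same context gets the +1-style update after one rollout and the -1-style update after another, depending only on how the suffix turned out.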
The trick is to provide dense rewards, i.e. not only once the full goal is reached, but a little bit for every random flailing of the agent in approximately the correct direction.
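As a toy illustration of sparse vs. dense reward (a hypothetical 1-D gridworld, nothing from the article): the sparse version pays only at the goal, while a potential-based shaped version pays a little for every step of progress.

```python
def sparse_reward(state, goal):
    # Pays off only when the full goal is reached.
    return 1.0 if state == goal else 0.0

def dense_reward(prev_state, state, goal):
    # Potential-based shaping on a 1-D line: the reward equals the decrease
    # in distance to the goal, so any flailing in roughly the right
    # direction earns a little signal, and a step backward is penalised.
    return abs(prev_state - goal) - abs(state - goal)

# An agent at 5 stepping to 4 (goal at 0) gets nothing from the sparse
# reward but +1 from the shaped one; stepping to 6 costs -1.
assert sparse_reward(4, goal=0) == 0.0
assert dense_reward(5, 4, goal=0) == 1.0
assert dense_reward(5, 6, goal=0) == -1.0
```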
The article talks about all of this and references the DeepSeek R1 paper[0], section 4.2 (first bullet point on PRM), on why this is much trickier to do than it appears.
[0]: https://arxiv.org/abs/2501.12948
How do you know the correct direction? Isn’t the point of learning that the right path is unknown to start with?
The correct solutions and the viable paths probably are known to the trainers, just not to the trainee. Training only on problems where the solution is unknown but verifiable sounds like the ultimate hard mode, and pretty hard to justify unless you have a model that's already saturated the space of problems with known solutions.
(Actually, "pretty hard to justify" might be understating it. How can we confidently extract any signal from a failure to solve a problem if we don't even know if the problem is solvable?)
Your hard mode is exactly the situation RL is used for, because it requires neither a corpus of correct examples nor insight into the structure of a good policy.
> How can we confidently extract any signal from a failure to solve a problem if we don't even know if the problem is solvable?
You rule out all the stuff that doesn’t work.
Yes this is difficult and usually very costly. Credit assignment is a deep problem. But if you didn’t find yourself in a hard mode situation, you wouldn’t be using RL.
Bit of a nitpick, but I think his terminology is wrong. Like RL, pretraining is also a form of *un*supervised learning.
Usual terminology for the three main learning paradigms:
- Supervised learning (e.g. matching labels to pictures)
- Unsupervised learning / self-supervised learning (pretraining)
- Reinforcement learning
Now the confusing thing is that Dwarkesh Patel instead calls pretraining "supervised learning" and you call reinforcement learning a form of unsupervised learning.
SL and SSL are very similar "algorithmically": both use gradient descent on a loss function of predicting labels, human-provided (SL) or auto-generated (SSL). Since LLMs are pretrained on human texts, you might say that the labels (i.e., next token to predict) were in fact human provided. So, I see how pretraining LLMs blurs the line between SL and SSL.
In modern RL, we also train deep nets on some (often non-trivial) loss function. And RL generates its own training data. Hence, it blurs the line with SSL. I'd say, however, it's more complex and more computationally expensive. You need many / long rollouts to find a signal to learn from. All of this process is automated. So, from this perspective, it blurs the line with UL too :-) Though its dependence on the reward is what makes the difference.
Overall, going from more structured to less structured, I'd order the learning approaches: SL, SSL (pretraining), RL, UL.
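One way to see why pretraining blurs the SL/SSL line, as a minimal sketch (a made-up helper for illustration, not any real framework's API): the "labels" are auto-generated by the training loop, but each one is just the next token a human actually wrote.

```python
def next_token_pairs(tokens):
    """Reframe a corpus as supervised (input, label) pairs: the label for
    each prefix is simply the next token in the human-written text."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

pairs = next_token_pairs(["the", "cat", "sat"])
assert pairs == [(["the"], "cat"), (["the", "cat"], "sat")]
```

Whether you call those pairs "auto-generated" (SSL) or "human-provided" (SL) is exactly the ambiguity in question.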
A “pretrained” ResNet could easily have been trained through a supervised signal like ImageNet labels.
“Pretraining” is not a correlate of the learning paradigms, it is a correlate of the “fine-tuning” process.
Also LLM pretraining is unsupervised. Dwarkesh is wrong.
You could think of supervised learning as learning against a known ground truth, which pretraining certainly is.
A large number of breakthroughs in AI are based on turning unsupervised learning into supervised learning (AlphaZero-style MCTS as policy improvers are also like this). So the confusion is kind of intrinsic.
Since it is not explicitly stated, "RL" in this article means Reinforcement Learning.
https://en.wikipedia.org/wiki/Reinforcement_learning
I, too, started parsing this as RL=real life and that’s why I found the headline interesting
Thank god. Was driving me mad.
It's deliberate clickbait/ragebait, not a mistake. It makes people click and talk about it, just like what's happening here.
This is the first time I've read of someone using an acronym for ragebait purposes. The acronym "RL" is very well known. Dwarkesh's podcast is mostly AI related, so it's no surprise that he freely uses acronyms. I think your take is very cynical.
That is a bizarre take. Dwarkesh Patel is publishing in a very specific domain, where RL is a very common and unambiguous acronym. I'd bet it was immediately clear to 99% of his normal audience, and to him it's such a high-frequency term that people finding it ambiguous would not even have crossed his mind.
(Like, would you expect people to expand LLM or AGI in a title?)
Never attribute to malice that which is adequately explained by stupidity.
Ok, so now it's stupid or malicious to use RL for reinforcement learning on a blog about AI, where everyone in the field has been referring to it as RL forever? Even Wikipedia puts (RL) after reinforcement learning.
There needs to be a new law, applicable to posts on the Internet of any kind.
Because that law doesn't hold, when malice has a massive profit motive, and almost zero downside.
Spammers, popups, spam, clickbait, all of it and more, not stupid, but planned.
RLVR is the more particular term of art in this domain.
VR stands for verifiable rewards and refers to the single bit per rollout that is at the heart of the post. Maybe we can convince dang to update the title.