This is very cool but it's not quite what I expected out of economic primitives.
I expected to see measures of the economic productivity generated as a result of artificial intelligence use.
Instead, what I'm seeing is measures of artificial intelligence use.
I don't really see how this is measuring the most important economic primitives. Nothing related to productivity at all, actually. Everything is about how and where and who... This is just demographics and usage statistics.
> I expected to see measures of the economic productivity generated as a result of artificial intelligence use.
> Instead, what I'm seeing is measures of artificial intelligence use.
Fun fact: this is also how most large companies are measuring their productivity increases from AI usage ;), alongside asking employees to tell them how much faster AI is making them while simultaneously telling them they're expected to go faster with AI.
Until AI is used to generate new revenue streams (i.e. acquire new customers), I don’t think the economic impact is going to impress. My two cents.
Agreed, I was similarly hoping for something akin to a total factor productivity argument.
The title actually makes me cringe a bit. It reads like early report titles in academia, where young students (myself no doubt included, back in the day) try their hardest to make a title sound clever but in actuality only obscure their own material.
Reminds me of psychohistory.
> This also highlights the importance of model design and training. While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.
If the output of the model depends on the intelligence of the person picking outputs out of its training corpus, is the model intelligent?
This is kind of what I don't quite understand when people talk about the models being intelligent. There's a huge blind spot, which is that the prompt entirely determines the output.
Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in training data.
If I, a moron, hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.
Well, if it ever gets to be a full replacement for PhDs, you'll know, because it will have already replaced you.
I think that's what is happening. It's simulating a conversation, after all. A bit like code switching.
What is a "sophisticated prompt"? What if I just tack on "please think about this a lot and respond in a highly sophisticated manner" to my question/prompt? Anyone can do this once they're made aware of this potential issue. Sometimes the UX layer even adds this for you in the system prompt, you just have to tick the checkbox for "I want a long, highly sophisticated answer".
In general it will match the language style you use.
If you ask a sophisticated question (lots of clauses, college reading level or above) it will respond in kind.
You are basically moving where the generation happens in the latent space. By asking in a sophisticated way, you are moving the latent space away from, say, children's books and towards, say, PhD dissertations.
They have a chart that shows it. The education level of the input determines the education level of the output.
These things are supposed to have intelligence on tap. I'll imagine this in a very simple way. Let's say "intelligence" is like a fluid. It's a finite thing. Intelligence is very valuable; it's the substrate for real-world problem solving that makes these things ostensibly worth trillions of dollars. Intelligence comes from interaction with the world: someone's education and experience. You spend some effort and energy feeding someone, clothing them, sending them to college. And then you get something out, which is intelligence that can create value for society.
When you are having a conversation with the AI, is the intelligence flowing out of the AI? Or is it flowing out of the human operator?
The answer to this question is extremely important. If the AI can be intelligent "on its own" without a human operator, then it will be very valuable -- feed electricity into a datacenter and out comes business value. But if a model is only as intelligent as the person using it, well, the utility seems to be very harshly capped. At best it saves a bit of time, but it will never do anything novel, it will never create value on its own, independently, and it will never scale beyond 1:1 with a human picking outputs.
If you must encode intelligence into the prompt to get intelligence out of the model, well, this doesn't quite look like AGI does it?
Of course, what I'm getting at is that you can't get something from nothing. There is no free lunch.
You spend energy distilling the intelligence of the entire internet into a set of weights, but you still had to expend the energy to have humans create the internet first. And on top of this, in order to pick out what you want from the corpus, you have to put some energy in: first, the energy of inference, but second and far more importantly, the energy of prompting. The model is valuable because the dataset is valuable; the model output is valuable because the prompt is valuable.
So wait then, where does this exponential increase in value come from again?
The same place the increase in force comes from when you use a lever.
I don't know, are we intelligent?
You could argue that our input (senses) entirely defines the output (thoughts, muscle movements, etc.).
There's a bit of baked-in stuff as well. We are a full culture-mind-body[-spirit] system.
A smart person will tailor their answers to the perceived level of knowledge of the person asking, and the sophistication of the question is a big indicator of this.
Skimmed, some notes for a more 'bear' case:
* value seems highly concentrated in a sliver of tasks - the top ten accounting for 32%, suggesting a fat long-tail where it may be less useful/relevant.
* Productivity gains drop to a more modest 1-1.2% once you account for humans correcting AI failures. 1% is still plenty good, especially given the recent historical malaise of only ~2% growth, but it's not industrial-revolution good.
* Reliability wall - a 70% success rate is still problematic, and we're down to 50% at just 2+ hours of task duration, or about "15 years" of schooling in terms of complexity, for the API. Web-based multi-turn does a bit better, but I'd imagine that's at least partly due to task-selection bias.
> 1% is still plenty good, especially given the recent historical malaise of only ~2% growth, but it's not industrial-revolution good.
You can't compare the speed of AI improvements to the speed of technical improvements during the industrial revolution. ChatGPT is 3 years old.
> These “primitives”—simple, foundational measures of how Claude is used, which we generate by asking Claude specific questions about anonymized Claude.ai and first-party (1P) API transcripts
I just skimmed, but is there any manual verification / human statistical analysis done on this, or are we just taking Claude's word for it?
Looks like they are relying on Claude for it, which is interesting. I bet social scientists are going to love this approach.
I'm not an economist so can someone explain whether this stat is significant:
> a sustained increase of 1.0 percentage point per year for the next ten years would return US productivity growth to rates that prevailed in the late 1990s and early 2000s
What can it be compared to? Is it on the same level of productivity growth as computers? The internet? Sliced bread?
Every single AI economic analysis talks about travel planning but none of the AI labs have the primitives (transit routing, geocoding, etc.) in a semantic interface for the models to use.
Coincidentally, YouTube demos on vibe coding commonly make travel planning apps!
> How is AI reshaping the economy?
oh I know this one!
it's created mountains of systemic risk for absolutely no payoff whatsoever!
no payoff whatsoever? I just asked Claude to do a task that would have previously taken me four days. Then I got up and got lunch, and when I was back, it was done.
I would never make the argument that there are no risks. But there's also no way you can make the argument there are no payoffs!
> I just asked Claude to do a task that would have previously taken me four days.
I think this probably says more about you than the "AI"
That's not a very constructive thought given you don't know what the task is or why it could have taken them days. In a field as large and complex as software, there are myriad reasons why any single person could find substantial time-saving opportunities with LLMs, and it doesn't have to point to their own inadequacies.
All of this performative bullshit coming out of Anthropic is slowly but surely making them my least favorite AI company.
We get it, guys: the very scary future is here any minute now, and you're the only ones taking it super seriously and responsibly and benevolently. That's great. Now please just build the damn thing.
These are economic studies of AI's impact on productivity, jobs, wages, and global inequality. It's important to UNDERSTAND who benefits from technology and who gets left behind. Even putting the positive impacts of a study like this aside, this kind of due diligence is critical for them to understand developing markets and how to reach them.
Ok Dario