I've been building implementation guides for solo founders and small businesses trying to use AI practically, so I read the PwC CEO Survey closely when it dropped.
The headline number (12% of CEOs generating measurable returns) gets cited a lot, but I think the more revealing finding is the 56% with zero financial impact.
These are companies with enterprise AI budgets, dedicated teams, and access to every tool on the market, and the majority are getting nothing back.
PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue (internal tooling, content drafts, meeting summaries), while the 12% they call the "Vanguard" are using AI in the product and customer experience itself (44% of the Vanguard vs. 17% of everyone else).
What I found interesting from a solo founder angle: the structural barriers causing large companies to fail at this (bureaucracy, legacy systems, misaligned incentives, multi-department approval processes) don't exist at the one-person scale.
The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.
Curious if others have a take on why the enterprise failure rate is this high despite the investment, and whether the Vanguard pattern (AI into the product, not just the back office) matches what people are seeing in practice.
I work in a large enterprise. On one hand, we’re being told we should think of ways to use AI more. On the other hand, to even start (beyond just using Copilot to develop what I’m already working on), I need to have an idea and sell it to some AI board to get their blessing. At that point, I will have a microscope on me, tracking everything, to watch if this wild experiment is a success or failure. No thanks.
If they really want me to try something new, they will give me the space to try things where I am free to fail quietly and privately, pivot, and continue trying things. Asking for ship dates on day one is no way to operate projects with so many unknown unknowns. No one wants to learn and fail with an audience.
That’s hard with AI, because early efforts are exploratory by nature. You don’t really know the shape of the value until you’ve iterated. If experimentation immediately becomes a public performance review, the safest move is not to experiment. I think this is a big part of why so many enterprise initiatives stall. The org says it wants discovery, but the governance model assumes delivery. Your point about needing space to fail quietly is important.
That is kind of a weird take, because my whole life people WANTED to be part of initiatives like this and were jealous of the people selected for them.
The following is my take on what's happening — outside the software-development domain, which is special vis-à-vis LLMs for obvious reasons.
Given worker access to generative LLMs, plus training and motivation to use them, LLMs are effective for certain workflows. Those workflows tend to be personal, one-off, or summarization-heavy in nature: write a bash script for this headache I have every day; tell me what colleague X is trying to say in his 1200-word email, since his writing is garbage and he can't get to the point; "what's the Excel formula syntax for this other thing that I keep forgetting?"; etc.
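To make that first category concrete, here's a purely illustrative sketch of the kind of one-off script a worker might ask an LLM to write for them; the filenames, directories, and the archiving task itself are hypothetical, not taken from anything in the thread:

```shell
#!/bin/sh
# Illustrative "daily headache" script: sweep today's *.log files into a
# dated archive and clear them out. Paths here are made up for the example.
set -eu

mkdir -p logs archive
printf 'example line\n' > logs/app.log   # stand-in for real log output

stamp=$(date +%Y-%m-%d)
tar -czf "archive/logs-$stamp.tar.gz" logs/*.log
rm -f logs/*.log
```

The script itself is trivial, which is the point: the ten minutes it saves accrue to the individual worker, not to any line item the firm can measure.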
So the time and mental-energy savings inure to the workers, mostly from coordination tasks that don't directly create core value. And then those savings aren't "reinvested" into value-producing activities whose benefits would inure to the firm, because the workers have no incentive to do so; don't know how to create core value; don't have the skills to create core value; or aren't permitted to do those activities by higher-ups.
Bottom line: LLMs are eating busywork coordination activities — hence no impact on most firms' bottom lines.
Exactly! This aligns with the "Pilot Purgatory" pattern. AI boosts productivity at the task level, but unless those savings are applied to workflows that directly drive revenue or strategic value, the firm sees little financial impact. It's a classic misalignment between individual efficiency and organizational ROI.
> PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue.
I feel like both the name and the description miss the mark though - the use isn't in pilots or isolated projects, it's individual people using it to find stuff and read/write/code/work/make decisions for them, and none of that is going to drive strategic value until companies raise expectations on productivity to take advantage of it.
It makes me think of a couple of bullet points from that "An AI CEO said something honest" post[1]:
> - majority of workers have no reason to be super motivated, they want to do their 9-5 and get back to their life
> - they're not using AI to be 10x more effective they're using it to churn out their tasks with less energy spend
[1] https://news.ycombinator.com/item?id=47042788
Yeah, the reluctance often comes from the learning curve, resistance to change, and fear of being let go (employees see it happen to others). Motivation might shift if organizations provide psychological safety, training, and space to experiment, showing that AI can enhance the work rather than just replace it.
> The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.
Are you saying that from what you see, small operators also fail to get ROI, but for different reasons?
Yes, but not at the same rate. And yes, it's usually for different reasons.
Enterprises usually struggle because of structure: approvals, incentives, legacy systems, fragmentation.
Small operators usually struggle because they stay at the task level (prompt-by-prompt productivity boosts) instead of building workflow-level or system-level leverage.
The question is whether legacy players can drive strategic growth that changes their trajectory enough to meet the AI-native disruptors. This survey is a data point.
Exactly! Having the budget isn't enough. Legacy players need to adapt processes and incentives to turn AI investment into real strategic advantage, or AI-native disruptors will outpace them.
Are these AI-native disruptors in the room with us now?
AI-native disruptors are designing products and experiences around AI from inception, rapidly capturing value and reshaping customer expectations. In the near term, for some incumbents, that raises a red flag.
Who? The only “disrupters” I see are AI hypesters selling AI tools.
Who are the people using these tools to create successful businesses and (non-AI) products?
Their bots are.
Related:
AI adoption and Solow's productivity paradox
https://news.ycombinator.com/item?id=47055979
Buying a gym membership has never made anyone fit.
True, but it's more than just using the tool; it's also how it's applied.
The average person is not ready for AI yet. Microsoft's Copilot has a low adoption rate. Data centers have big energy bills and a lack of clients, and most of them see no ROI.
I think you’re pointing at something real. Adoption lag matters. If the end user doesn't change behavior, ROI won’t show up no matter how much infrastructure gets built. I’d add another layer though: expectations. Many CEOs implicitly treat AI like deterministic software: install it, flip the switch, get linear productivity gains. But these systems are probabilistic. They’re "slippery": output quality varies, edge cases multiply, and oversight is required. That makes ROI non-linear.
> 56% of CEOs report zero financial return from AI in 2026 (PwC survey, n=4,454)
This is a lie. It can't be zero. It is negative.