I've found it very weird observing how CEOs across many companies behave as if they're part of some hivemind. When to do layoffs, how to implement office policies, and now pushing AI in the same way, as if they have no brain of their own. It's very off-putting. I can't tell if it's collusion or whether maybe capitalism has its own goals that are pursued in lockstep by its creepy agents. I think the end goal is definitely to eliminate workers as much as they possibly can. And they think whoever can do that first will "win".
Having worked with CEOs of a few start-ups, I suspect that many CEOs feel like they belong to a small circle of people "in the know".
Part of it is going through the VC gauntlet, I believe. Let's face it, to get money from VCs, you need to abase yourself, to learn how to lie to them and to yourself, to focus only on the survival of your company and pretending that you're going to make lots of money, regardless of your original goals and ideals. If you're a bit of a techie, you have just entered a world of appearances, where [it feels like] pretending to be successful and knowing something more than others matters more than actually doing something. And being kicked out would mean losing funding, which means everything for which you've twisted yourself into something you were not.
I think that this strongly favors hivemind/mob thinking.
That's not any different from programmers clamoring en masse to switch to the latest cool framework or fashionable coding trend.
but programmers do not switch "en masse". that's an HN bubble.
Bootstrap and HTML5 had fanfare and cool gfx on launch so people on-boarded themselves real quick
You might find some inspiration in this video of metronomes synchronising and desynchronising [0]. It doesn't take much to cause things to happen at the same time; in this case the CEOs are likely all responding to the same market signals at the same time for the same reason. Probably interest rates.
[0] https://www.youtube.com/watch?v=Aaxw4zbULMs
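The metronome effect in that video is often modelled as coupled phase oscillators (the Kuramoto model). Here is a minimal sketch of that idea; the parameters and the phase-oscillator abstraction are my own assumptions for illustration, not anything from the video:

```python
import math
import random

random.seed(0)

N = 10    # number of oscillators ("metronomes")
K = 2.0   # coupling strength; K = 0 would mean no shared base
dt = 0.01

# Random starting phases, slightly different natural frequencies.
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(N)]

def order_parameter(ph):
    """r in [0, 1]: 0 = fully desynchronised, 1 = fully in phase."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

r_start = order_parameter(phases)
for _ in range(5000):  # simple Euler integration
    # Each oscillator is nudged toward the others' phases.
    mean_field = [
        sum(math.sin(pj - pi) for pj in phases) / N for pi in phases
    ]
    phases = [
        (pi + (w + K * mf) * dt) % (2 * math.pi)
        for pi, w, mf in zip(phases, freqs, mean_field)
    ]
r_end = order_parameter(phases)
print(f"order parameter: {r_start:.2f} -> {r_end:.2f}")  # rises toward 1 for strong K
```

Even a weak coupling term (the shared "base" in the video, or the shared market signals in the analogy) is enough to pull independently-ticking oscillators into lockstep.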
> CEOs across many companies behave as if they're part of some hivemind.
This hivemind is called Blackrock.
Aladdin, even.
What makes you think that it's only CEOs who look like they're part of a hive mind? It's got nothing to do with capitalism or its "creepy agents". It's simply the human condition. It's literally company/human see, company/human do. One company/management loudly proclaims whatever their position is and others simply follow it because they think it's either "industry standard" or it's somehow convenient to them. That's all it is: people trying to be safe.
Let me give you a technical parallel. A couple of engineers/architects from Big Co. that's hugely successful leave and go to Hot Startup. There they proselytize their One True Way because honestly that's all they know. Everybody in Hot Startup goes along with it because they are Senior Engineers from Big Co. who are now plotting the course, and Big Co. is HUGE, so they know what they're talking about. Now because Hot Startup is suddenly using the One True Way, everybody else in the market tries to copy them, because that's obviously why Hot Startup is Hot. This leads to a job market where people optimize for things used by Hot Startup. This tilts the skill set of the general tech market towards the One True Way, making it gospel to a lot of people. So hiring managers who don't know the first thing about any of it suddenly start optimizing for One True techs and asking for 20 years' experience with React. They think they're doing the safe thing by using the same tech stack used by everybody else - the industry standard. Never mind that the "industry standard" changes every time it's convenient.
This is the same thing for CEOs. Oh you're having a slightly down quarter and have to answer to investors? Say you're using AI. That's the in-thing and will give you that bump to ride out the quarter. You screwed up in 2021-22 and hired a fuckton of people who are just sitting on their hands costing the company money? Say AI and get rid of them because they're not productive. It's got nothing to do with collusion or anything like that. It's just that people have mismatched expectations and things happen downstream of these unmanaged expectations.
> This tilts the skill set of the general tech market towards the One True Way making it gospel to a lot of people
That lot of people literally cannot get hired anywhere but startups because everyone else isn't so naive
> things happen downstream of these unmanaged expectations
It sounds like a metric fuckton of people need to retire or get out of the way already if they can't set expectations despite being in the exact position where they should be able to talk to multiple audiences
None of these excuses appeases the investors or the heads-down employees. Shit will have to change sooner rather than later. Many factors will make it so. This is exactly what defines a tech bubble.
Sure, plenty of incompetence out there, but nobody wakes up thinking they’re doing wrong. They overpromise, they play it safe. Sometimes it works, sometimes it doesn’t.
As Picard says, "It is possible to commit no mistakes and still lose. That is not a weakness; that is life"
We have a very different understanding of what it means to make a mistake
This certainly fits with what I want to believe... But are there any good polls that would put some sort of statistical backbone into it?
"100% of tech experts I talk to" - that seems like a legitimate population sample to support such broad statements.
The "AI ecosystem" has its flaws, but this article seems to just describe how things are now and how the author wants them to be, without a path to get there.
It's perfectly valid to point something out as a problem. Not every post needs to provide the solution as well. Even raising awareness of the issue is helpful.
What path do you propose?
> People worry that not being seen as mindless, uncritical AI cheerleaders will be a career-limiting move in the current environment of enforced conformity within tech
Yes. I am seeing this right now, but it's not enough just to join the chanting around the altar; you've got to be seen enthusiastically using AI as much as possible, and it's turned the contributions of some of the most productive high-level developers in my company into... well, slop.
I'm dealing with a bug from one such developer right now: they wrote code that uses my library, it's not working for them, and they don't understand their own code well enough to fix it. The reason it's become a bug I'm working on? Cursor couldn't fix it for them, and they lack the mental model to do it themselves, because it's hard to mentally model code you didn't write.
I'm sure there's an inevitable "well, your company/colleague is doing it wrong then" critique incoming. And I agree.
But given that "doing it right" is often being defined as "using it as much as possible" by business leaders across the industry, then we get these paradoxical outcomes where doing so reduces productivity, but no-one is ready to admit that.
You've got to be AI-first, or AI-native, at least if you want the investors to stay invested.
I've never experienced such a collective adherence to tech hype (but probably only because my company couldn't figure out how to jam a blockchain and/or NFT in previously). Not even during the days when everyone was serverless, or "cloud-first" / "cloud-native".
It's wild.
TL;DR - we've always as an industry used appropriate tools for appropriate problems, right now it feels like so many of us are throwing AI slop at walls and seeing what sticks.
I think this piece makes a fair and important point about LLM hype and the need to treat it as a normal technology rather than a cult movement. The over-the-top marketing and constant "AI will change everything" drumbeat can definitely obscure the more grounded, practical ways it can be used day-to-day.
That said, every major technology wave has needed a similar level of push, hype, and momentum to reach mass adoption. The Internet existed for decades before the public knew what to do with it. AOL gave it a huge push, with "You've got mail", endless free trial CDs, and an almost manic drive to bring it into homes, before it became the foundation of modern life. The same was true of personal computers: early machines like the Apple II or IBM PC were expensive, clunky, and had little practical software. But without the evangelism, marketing, and cultural hype that surrounded them, the entire ecosystem might never have matured. So while the AI frenzy can feel excessive, some level of over-excitement may be what turns the technology from niche tools into something broadly accessible and transformative, just as it did for the web and the PC before it.
> every major technology wave has needed a similar level of push, hype, and momentum to reach mass adoption
People were standing in line for the first iPhone. Gmail had a waiting list. Tesla sold EVs far faster than they could make them.
On the other hand, I now literally have AI icons blinking in several apps, begging to be used. This isn't a regular marketing push of a brand-new product, it is companies desperately trying to justify their billions of dollars of sunk costs by bolting AI onto their existing products.
That mass adoption has brought with it the normalisation of automated surveillance and attention farming, and has arguably lowered people's tolerance of "the other". I'm currently not convinced it's a net positive. Perhaps things would have gone better if the adoption had been slower.
I think this is exactly the right intuition. I think people hopelessly underestimate the human tendency to do nothing. We have this idea that if an innovation is good enough it should “sell itself”, and that’s almost never true because across all organizations, it’s almost always safer to do nothing, adopt nothing, keep doing what you’re doing.
No one gets fired for suggesting no change.
It takes a special level of hype where “doing nothing” is no longer the sensible choice.
Do I wish this hype were spread around to other technologies that are also awesome? Of course. I'd love to help someone figure out a way to do that, but as of now, we don't know how. Humans are very bad at holding two different ideas in their head at once.
But we don't need to do anything. We don't need AI and so we don't need a push for it. If AI is just a "normal" technology that has some legitimate uses, it doesn't need a huge boost, it doesn't need any hype at all. It can just be slowly discovered and used by the people who have a legitimate use for it. Doing nothing is often a good move.
> technology that has some legitimate uses, it doesn't need a huge boost
That's what I'm disagreeing with. "Legitimate uses" aren't just hanging out in the ether waiting to attach themselves to useful technology; they emerge via a grinding sales process and big industry-wide cultural changes.
People don’t like change.
I think AI and its knock-on effects in robotics will bring massive productivity boosts in industries where productivity has been lagging for years. It will take decades and multiple boom-bust cycles to drag the population into the change, but it'll happen.
I feel like this is another case of an article kind of missing its own point. It says the majority view among tech experts is that AI is overhyped. And why does that matter? I guess because:
> If we were to simply listen to the smart voices of those who aren't lost in the hype cycle, we might see that it is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn't centralized under the control of a handful of giant companies. Or any other definition of "good AI" that people might aspire to.
So it's saying "don't throw the baby out with the bathwater"? We can have our cake and eat it too, we can have AI and also have it not be damaging? Except we can't. The article gives these two claims as reasons for other things but doesn't realize they are the reasons we can't:
> because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.
> tech leaders are collaborating with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers.
This kind of thing to me is like saying we could make a really great machine gun that would be used just for peacefully shooting tin cans, if only we could get people to stop using such things to kill each other. Well, we can't do that as long as the people who make the decisions about making such things want to make things that kill people instead of putting holes in tin cans. (Not the people who actually make the guns! The people who make the decisions.) And so this part doesn't matter:
> But it's important to remember that there are a lot more of us. ... Very few agree with the hype bubble that the tycoons have been trying to puff up.
But the tycoons still have the money and still call the shots, and that's why we're in the mess we're in. Neither the technical reality nor the majoritarian reality matters to the tycoons. The above quoted statement could apply equally well to all sorts of aspects of modern society: social media, streaming services, telecom, political ads, gerrymandering, you name it. Most people don't like a lot of what we've got.
And the reason for that is that we're in a situation where what most people want doesn't matter. All that matters is what a small bunch of people with a lot of money wants. So this is off base:
> Once in a while, you might hear some coverage of the critiques of AI, but even those will generally be from people outside the tech industry, and they will often solely be about frustrations or anger with the negative externalities of the centralized Big AI companies. Those are valid and vital critiques, but [...]
No buts. Those are valid and vital critiques, so we can stop there (for now). The technical concerns are of secondary importance at best. The real issue is the concentration of wealth and power. If there were no Google, no OpenAI, no Meta, no Nvidia, no Amazon, and so on, there would be no one in a position to ignore the technical issues and instead spew pre-enshittified garbage out and call it AI. And then maybe people could get down to the business of using it in non-evil ways. But worrying about how many technical people think this or that is a distraction from the fact that it doesn't matter what anyone thinks except a small group of disproportionately powerful people.
> yet nobody outside of that cohort will mention this reality ... you'll basically only hear hype ...
This doesn't match my experience talking with people outside of tech, and as such, the whole essay feels like a straw man. There definitely are some people who drank the kool-aid, but they seem like a minority? I don't live in the Bay Area though.
Yeah. I’m also not in the bay, but almost every tech person I know IRL is somewhere in the ballpark of this.
That, and non-tech folks tend to assume it can do all the jobs except the ones they actually know enough about to explain why an LLM can't do that reliably enough to be even slightly okay.
The hype is justified. Nowadays I rarely write code directly anymore. I don't have to navigate around the codebase to trace data flow. I don't have to find mistakes causing a deadlock. The way I do my job compared to a year ago is completely different and I'm accomplishing more in the same amount of time. This isn't even including my usage in my personal life for education.
I'm skeptical the majority of tech experts are struggling to find the utility of them.
> I'm skeptical the majority of tech experts are struggling to find the utility of them.
Looks like we did not read the same article.
> I don't have to navigate around the codebase to trace data flow.
How big is your codebase?
personally i find llms to be absurdly capable, and people are just using them wrong
No true Scotsman, eh?