As someone who appreciates machine learning, the main dissonance I have with interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This appears everywhere: every tool tries to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing Escape and Backspace to undo some action that rewrites what I am doing into something I don't want or didn't intend.
It wastes time and none of it optimizes for anything I actually want; their tools feel like they are built to help people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation and able to query information directly.
>As someone who appreciates machine learning, the main dissonance I have with interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This is the nightmare scenario with AI, ie people settling for Microsoft/OpenAI et al to do the "thinking" for them.
It is alluring, but of course it is not going to work. It is similar to what happened to the internet via social media, ie "kick back and relax, we'll give you what you really want, you don't have to take any initiative".
My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces: https://codesolvent.com/botworx/intelligent-workspace/
> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"
I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total offloading of the need to think.
AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.
It's unbelievable to me that tech leaders lack the insight to recognize this.
So how to explain the current AI mania being widely promoted?
I think the best-fit explanation is simple con artistry. They know the product is fundamentally flawed and won't perform as promised. But the money to be made selling the fantasy is simply too good to ignore.
In other words --- pure greed. Over the longer term, this is a weakness, not a strength.
Don’t attribute to malice that which can equally be attributed to incompetence.
I think you’re over-estimating the capabilities of these tech leaders, especially when the whole industry is repeating the same thing. At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics: if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
If, however, AI ended up delivering and they missed the boat, they’re going to be held accountable.
It’s much less risky to just follow industry trends. It takes a lot of technical knowledge, guts, and confidence in your own judgement to push back against an industry-wide trend at that level.
I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos, but will fail pretty badly when deployed.
If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed. Even if it fails, as long as it's not spectacular, you can just say "yeah, but it's getting better every day!", and "you'll still have the best 10% of your human workers in the loop".
When you go to deploy it, 99% is just not good enough. The actual users will be much noisier than the demo executives and internal testers.
When you have a call center with 100 people taking 100 calls per day, replacing those 10,000 calls with 99% accurate AI means you have to clean up after 100 bad calls per day. Some percentage of those are going to be really terrible, like the AI did reputational damage or made expensive legally binding promises. Humans will make mistakes, but they aren't going to give away the farm or say that InsuranceCo believes it's cheaper if you die. And your 99% accurate-in-a-lab AI isn't 99% accurate in the field with someone with a heavy accent on a bad connection.
So I think that the parties all "want to believe", and to an untrained eye, AI seems "good enough" or especially "good enough for the first tier".
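The arithmetic in the comment above can be sketched in a few lines of Python (the 99% success rate and the staffing figures are the commenter's hypotheticals, not measured data):

```python
# Hypothetical figures from the comment above, not measured data.
accuracy = 0.99  # assumed per-interaction success rate

# Chance that a 10-run demo goes perfectly: 0.99^10 is about 0.90.
demo_runs = 10
p_demo_clean = accuracy ** demo_runs
print(f"Chance a {demo_runs}-run demo succeeds end to end: {p_demo_clean:.0%}")

# Daily cleanup load for a 100-agent call center at 100 calls each.
agents, calls_per_agent = 100, 100
total_calls = agents * calls_per_agent      # 10,000 calls/day
bad_calls = total_calls * (1 - accuracy)    # about 100 bad calls/day
print(f"Bad calls to clean up per day: {bad_calls:.0f} of {total_calls}")
```

Run as-is, this reproduces the figures in the comment: roughly a 90% chance of a flawless demo, and about 100 bad calls a day to clean up at deployment scale.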
> if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
Understatement of the year. At this point, if AI fails to deliver, the US economy is going to crash. That would not be the case if executives hadn't bought in so hard earlier on.
Ultimately it's a distinction without a difference. Maliciously stupid or stupidly malicious invariably leads to the same place.
The discussion we should be having is how we can come together to remove people from power and minimize the influence they have on society.
We don't have the carbon budget to let billionaires who conspire from island fortresses in Hawaii do this kind of reckless stuff.
It's so dismaying to see these industries muster the capital and political resources to make these kinds of infrastructure projects a reality when they've done nothing comparable with respect to climate change.
It tells me that the issue around the climate has always been a lack of will, not ability.
It's part of a larger economic con centered on the financial industry and the financialization of American industry. If you want this stuff to stop, you have to be hoping for (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masquerade.
It will hurt, and they'll scare us with the idea that it will hurt, but the secret is that we get to choose where it hurts - the same as how they've gotten to choose the winners and losers for the past two decades.
Agreed! I recently listened to a podcast (video) from the "How Money Works" channel on this topic: "How Short Term Thinking Won" - https://youtu.be/qGwU2dOoHiY
The author argues that this con has been driven by three relatively simple levers: low dividend yields, the legalization of stock buybacks, and executive compensation packages that generate lots of wealth on short pump-and-dump timelines.
If those are the causes, then simple regulatory changes to make stock buybacks illegal again, limit the kinds of executive compensation contracts that are valid, and incentivize higher dividend yields/penalize sales yields should return the market to the previous long-term-optimized behavior.
I doubt that you could convince the politicians and financiers who are currently pulling value out of a fragile and inefficient economy under the current system to make those changes, and if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system. I think you're right that it will take a huge disaster that the wealthy and powerful are unable to dodge and unable to blame on anything but their own actions, I just don't know what that event might look like.
> correction that wipes out the incumbents who absolutely are working to maintain the masquerade
You need to also have a robust alternative that grows quickly in the cleared space. In 2008 we got a correction that cleared the incumbents, but the ensuing decade of policy choices basically just allowed the thing to re-grow in a new form.
I thought we pretty explicitly bailed out most of the incumbents. A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy. 2008's "correction" should have seen the end of most of our investment banks and auto manufacturers. Say what you want to about them (and I have no particular love for either), but Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch. There should have been more, and Goldman Sachs and GM et al. should not currently exist.
> A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy.
Yeah that's a more accurate framing, basically just saying that in '08 we put out the fire and rehabbed the old growth rather than seeding the fresh ground.
> Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch
I disagree, I think they're artifacts of the rehab environment (the ZIRP policy sphere). I think in a world where we fully ate the loss of '08 and started in a new direction you might get Tesla, but definitely not TSLA, and the version we got is really (Tesla+TSLA) IMO. Bitcoin to me is even less of a break with the pre-08 world; blockchain is cool tech but Bitcoin looks very much "Financial Derivatives, Online". I think an honest correction to '08 would have been far more of a focus on "hard tech and value finance", rather than inventing new financial instruments even further distanced from the value-generation chain.
> Goldman Sachs and GM et al. should not currently exist.
Looking forward to the OpenAI (and Anthropic) IPOs. It’s funny to me that this info is being “leaked” - they are sussing out the demand. If they wait too long, they won’t be able to pull off the caper (at these valuations). And we will get to see who has staying power.
It’s obvious to me that all of OpenAI’s announcements about partnerships and spending are gearing up for this. But I do wonder how Altman retains the momentum through to next year. What’s the next big thing? A rocket company?
Yeah, it started with Wall Street, with all the depressions and wars that it brought, and it hasn't stopped; at each cycle the curve has to go up, with exponential expectations of growth, until it explodes and takes the world economy to the ground.
How do you guarantee your accelerationism produces the right results after the collapse? If the same systems of regulation and power are still in place, they will produce the same result afterwards.
Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
These attempts to try to steer demand despite clear indicators that it doesn't want to go in that direction aren't just driven by greed, they're driven by abject incompetence.
Also, if the current level of AI investment and valuations isn't justified by market demand (which I believe is the case), many of these people/companies are getting more money than they would without the unreasonable hype.
You seem to be committing the error of believing that the problem here is just that they’re not selling what people want to buy, instead of identifying the clear intention to _create_ the market.
> Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
Not necessarily. Just look at this clip [1] from Margin Call, an excellent movie on the GFC. As Jeremy Irons says in that clip, the market (as usually understood in classical economics, with producers making things for clients/customers to purchase) is of no importance to today's market economy; almost all that matters, at the hundreds-of-billions to multi-trillion-dollar level, is for your company "to play the music" as well as the other (necessarily very big) market participants, "nothing more, nothing less" (again, to quote Irons in that movie).
There's nothing in it about "making what people/customers want" and all that, which is regarded as accessory, if it is taken into consideration at all. As another poster mentions in this thread, this is all the direct result of the financialization of much of the Western economy; this is how things work at this level, given these (financialized) inputs.
Given that they aren’t meeting their sales targets at all, I guess that’s a little bit encouraging about the discernment of their customers. I’m not sure how Microsoft has managed to escape market discipline for so long.
Not really. It's just that the point you have to push people to get them to start pushing back on something tends to be quite high. And it's very different for different people on different topics.
In the past this wasn't such a big deal because businesses weren't so large or so frequently run by myopic sociopaths. Ebenezer Scrooge was running some small local business, not a globe spanning empire entangling itself with government and then imposing itself on everybody and everything.
> So how to explain the current AI mania being widely promoted?
Probably individual actors have different motivations, but let's spitball for a second:
- LLMs are genuinely a revolution in natural language processing. We can do things now in that space that were unthinkable single-digit years ago. This opens new opportunity spaces to colonize, and some might turn out quite profitable. Ergo, land rush.
- Even if the new spaces are not that much of a value leap intrinsically, some may still end up obsoleting earlier-generation products pretty much overnight, and no one wants to be the next Nokia. Ergo, defensive land rush.
- There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI to serve, which in essence means deprecating the labor class. The benefits (to that specific someone, anyway...) would be staggering enough to make that a goal worth pursuing even if the odds of reaching it are unclear and arguably quite low.
- The increasingly leveraged debt that's funding the land rush's capex needs to be paid off somehow, and I'll venture everyone knows that the winners will possibly be able to, but not everyone will be a winner. In that scenario, you really don't want to be a non-winner. It's kind of like that joke where you don't need to outrun the lions, you only need to outrun the other runners, except in this case, the harder everyone runs, the bigger the lions become. (Which is a funny thought now, sure, but the feasting, when it comes, will be a bloodbath.)
- A few, I'll daresay, have perhaps been huffing each other's farts too deep and too long and genuinely believe the words of ebullient enthusiasm coming out of their own mouths. That, and/or they think everyone's job except theirs is simple actually, and therefore just this close to being replaceable (which is a distinct flavor of fart, although coming from largely the same sources).
So basically the mania is for the most part a natural consequence of what's going on in the overlap of the tech itself and the incentive structure within which it exists, although this might be a good point to remember that cancer and earthquakes too are natural. Either way, take care of yourselves and each other, y'all, because the ride is only going to get bouncier for a while.
I think on some level it is being done on the premise that further advancement requires an enormous capital investment and if they can find a way to fund that with today’s sales it will give the opportunity for the tech to get there (quite a gamble).
I have a feeling that Microsoft is setting themselves up for a serious antitrust lawsuit if they do what they're intending. They should really be careful about introducing products into the OS that take business away from all other AI shops. I fear this would cripple innovation if allowed, since Microsoft has drastically fatter wallets than most of their competition.
Corruption is indeed going strong in the current corporate-controlled US group of lame actors posing as a government. At the least, Trump is now regularly falling asleep - that's the best example that you can use any surrogate puppet and the underlying policies will still continue.
It's not "pure greed." It's keeping up with the Joneses. It's fear.
There are three types of humans: mimics, amplifiers, originators. ~99% of the population are basic mimics, and they're always terrified - to one degree or another - of being out of step with the herd. The hyper mimicry behavior can be seen everywhere and at all times, from classrooms to Tiktok & Reddit to shopping behaviors. Most corporate leadership are highly effective mimics, very few are originators. They desperately herd follow ('nobody ever got fired for buying IBM').
This is the dotcom equivalent of every business having to be e- and @-ified (the advertising was aggressively targeted at that at the time). 1998-2000: you must be e-ready. Your hotdog stand must have its own web site.
It's not just AI mania, it's been this way for over a decade.
When I first started consulting, organizations were afraid enough of lack of ROI in tech implementations that projects needed an economic justification in order to be approved.
Starting with cloud, leadership seemed to become rare, and everything was "us too!".
After cloud it was data/data visualization, then over-hiring during Covid, then RTO, and now it's AI.
I wonder if we will ever return to requiring that kind of economic justification? The bellwether might be Tesla's stock price (at a rational valuation).
US technocapitalism is built on the premise of technological innovation driving exponential growth. This is why they are fixated on whatever provides an outlook for that. The risk that it might not work out is downplayed, because (a) they don’t want to hazard not being at the forefront in the event that it does work out, and (b) if it doesn’t work out, nobody will really hold them accountable for it, not the least because everybody does it.
With the mobile and cloud revolutions having run out of steam, AI is what promises the most growth by far, even if it is a dubious promise.
It’s a gamble, a bet on “the next big thing”. Because they would never be satisfied with there not being another “big thing”, or not being prominently part of it.
I was just in a thread yesterday with someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was.
Everything about the conversation felt like talking to a true believer, and there's plenty out there.
It's the hopes and dreams of the Next Big Thing after blockchain and web3 fell apart, and everyone is desperate to jump on the bandwagon, because ZIRP is gone and everyone who is risk-averse will only bet on what everyone else is betting on.
Thus, the cycle feeds itself until the bubble pops.
> "someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was."
I think that. It's new technology and it always takes some years before all the implications and applications of new technology are fully worked out. I also think that we're in a bubble that will hose a lot of people when it pops.
1) We have barely scratched the surface of what is possible to do with existing AI technology.
2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that is the state that most companies were at with _computers_. Huge, wasteful projects that didn't improve productivity at all. It took 10 years of false starts sometimes to really get traction.
AI research has always been a series of occasional great leaps between slogs of iterative improvements, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will result in a few things becoming invisible architecture* we stop appreciating and then the next big leap starts the hype cycle anew.
*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
My suspicion is because they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and, rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
Hearing similar stories play out elsewhere too, with targets being missed left and right.
There's definitely something there with AI, but also a giant chasm between reality and the sales expectations needed to make the current financial engineering around AI make any sense.
The difference between poison and medicine is the amount. AI is great and very useful, but they want the AI to replace you instead of supporting your needs.
"AI everywhere" is worse than "AI nowhere". What we need is "AI somewhere".
That's what we had before LLMs. Without the financially imposed contrivance of it needing to be used everywhere, it was free to be used where it made sense.
Even the dev blogs and anything related to Java, .NET, C++ and Python out of Redmond seem to be all about AI, and anything else is now a low-priority ticket on their roadmaps.
But does it sell enough to regular Windows Home users? If MS issues an ultimatum - "you need to buy AI services to use Windows" - they might get a bunch more clueless subscribers. In the same way as there's no ability to set up Windows without internet connection and MS account, they could make it mandatory to subscribe to Copilot.
I think Microsoft's long-term plan is exactly that: to make Windows itself a subscription product. Windows 12 Home for $4.99 a month, Copilot included. It will be called OSaaS.
> In the same way as there's no ability to set up Windows without internet connection and MS account
Not true. They're clearly unwilling or unable to remove this code path fully, or they would have done so by now. There's just a different workaround for it every few years.
Super interesting how this arc has played out for Microsoft. They went from having this massive advantage in being an early OpenAI partner with early access to their models to largely losing the consumer AI space: Copilot is almost never mentioned in the same breath as Claude and ChatGPT. Though I guess their huge stake in OpenAI will still pay out massively from a valuation perspective.
Microsoft seems to be actively discarding the consumer PC market for Windows. It's gamers and enterprise, it seems. Enterprise users don't get a lot of say in what's on their desktop.
It wants to help create things in Office documents, I imagine just saving you the copy and paste from the app or web form. The one thing I tried to get it to do was to take a spreadsheet of employees and add a column with their office numbers (it has access to the company directory). The response was something like "here's how you would look up an office number, you're welcome!"
It is functional at RAG stuff on internal docs but definitely not good - not sure how much of this is Copilot vs corporate disarray and access controls.
It won't send emails for me (which I would think is the agentic MVP), but that is likely a switch my organization daren't turn on.
TL;DR: it's valuable as a normal LLM, very limited as an add-on to Microsoft's software ecosystem.
Despite having an unlimited war chest and all the necessary resources, I'm not expecting Microsoft to come out a winner of this AI race. The easy investment was to throw billions at OpenAI to gain access to their tech, but that puts them in the weird position of not investing heavily in cultivating their own AI talent and not controlling their own destiny with their own horse in the race and their own SOTA models.
Apple is having a similar issue: unlimited wealth, yet outsourcing to external SOTA model providers.
Blaming slow sales on salespeople is almost always a scapegoat. Reality is that either the product sells or it doesn’t.
Not saying that sales is useless, far from it. But with an established product that people know about, the sales team is more of a conduit than they are a resource-gathering operation.
Too many companies have bolted AI onto their existing products with the value-prop "Let us do the work (poorly) for you."
That's because in its current form, that's all it's reliably good for. You can't sell that it might hallucinate the numbers in the Q4 report.
The dissonance runs straight through from the top of the org chart.
https://x.com/satyanadella/status/1996597609587470504
Just 22 hours ago... https://news.ycombinator.com/item?id=46138952
> If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed.
Agreed, but 99% is being very generous.
> At this point, if AI fails to deliver, the US economy is going to crash.
Race to "too big to fail" on hype, and your losses are socialized.
And if it does deliver, everyone's gonna be out of a job and the US economy is also going to crash.
Nice cul-de-sac our techbro leaders have navigated us into.
> Don’t attribute to malice that which can equally be attributed to incompetence.
At this point I think it might actually be both rather than just one or the other.
It's mass delusion.
It's part of a larger economic con centered on the financial industry and the financialization of American industry. If you want this stuff to stop, you have to be hoping (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masqerade.
It will hurt, and they'll scare us with the idea that it will hurt, but the secret is that we get to choose where it hurts - the same as how they've gotten to choose the winners and losers for the past two decades.
Agreed! I recently listened to a podcast (video) from the "How Money Works" channel on this topic:
"How Short Term Thinking Won" - https://youtu.be/qGwU2dOoHiY
The author argues that this con has been caused by three relatively simple levers: Low dividend yields, legalization of stock buybacks, and executive compensation packages that generate lots of wealth under short pump-and-dump timelines.
If those are the causes, then simple regulatory changes to make stock buybacks illegal again, limit the kinds of executive compensation contracts that are valid, and incentivize higher dividend yields/penalize sales yields should return the market to the previous long-term-optimized behavior.
I doubt that you could convince the politicians and financiers who are currently pulling value out of a fragile and inefficient economy under the current system to make those changes, and if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system. I think you're right that it will take a huge disaster that the wealthy and powerful are unable to dodge and unable to blame on anything but their own actions, I just don't know what that event might look like.
> correction that wipes out the incumbents who absolutely are working to maintain the masquerade
You need to also have a robust alternative that grows quickly in the cleared space. In 2008 we got a correction that cleared the incumbents, but the ensuing decade of policy choices basically just allowed the thing to re-grow in a new form.
I thought we pretty explicitly bailed out most of the incumbents. A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy. 2008's "correction" should have seen the end of most of our investment banks and auto manufacturers. Say what you want to about them (and I have no particular love for either), but Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch. There should have been more, and Goldman Sachs and GM et al. should not currently exist.
> A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy.
Yeah that's a more accurate framing, basically just saying that in '08 we put out the fire and rehabbed the old growth rather than seeding the fresh ground.
> Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch
I disagree, I think they're artifacts of the rehab environment (the ZIRP policy sphere). I think in a world where we fully ate the loss of '08 and started in a new direction you might get Tesla, but definitely not TSLA, and the version we got is really (Tesla+TSLA) IMO. Bitcoin to me is even less of a break with the pre-08 world; blockchain is cool tech but Bitcoin looks very much "Financial Derivatives, Online". I think an honest correction to '08 would have been far more of a focus on "hard tech and value finance", rather than inventing new financial instruments even further distanced from the value-generation chain.
> Goldman Sachs and GM et al. should not currently exist.
Hard agree here
Looking forward to the OpenAI (and Anthropic) IPOs. It’s funny to me that this info is being “leaked” - they are sussing out the demand. If they wait too long, they won’t be able to pull off the caper (at these valuations). And we will get to see who has staying power.
It’s obvious to me that all of OpenAIs announcements about partnerships and spending is gearing up for this. But I do wonder how Altman retains the momentum through to next year. What’s the next big thing? A rocket company?
Increasing signs the ship has sailed on the IPO window for these folks but let’s see.
Hell yes! Would love to short.
Yeah, it started with Wall Street itself, with all the depressions and wars it brought, and it hasn't stopped. Each cycle the curve has to go up, with exponential expectations of growth, until it explodes and takes the world economy down with it.
How do you guarantee your accelerationism produces the right results after the collapse? If the same systems of regulation and power are still in place then it would produce the same result afterwards
It's like when a child doesn't want something, you "give them a choice": would you like to put on your red or white shoes?
This assumes fair competition in the tech industry, which has evaporated without a path for return years ago.
> In other words --- pure greed.
Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
These attempts to try to steer demand despite clear indicators that it doesn't want to go in that direction aren't just driven by greed, they're driven by abject incompetence.
This isn't pure greed, it's stupid greed.
Pure greed is stupid greed.
Also, if the current level of AI investment and valuations isn't justified by market demand (which I believe is the case), many of these people/companies are getting more money than they would without the unreasonable hype.
you seem to be committing the error of believing that the problem here is just that they’re not selling what people want to buy, instead of identifying the clear intention to _create_ the market.
> Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
Not necessarily. Just look at this clip [1] from Margin Call, an excellent movie about the GFC. As Jeremy Irons says in that clip, the market as classically understood (producers making things for clients/customers to purchase) is of no importance to today's market economy. At the hundreds-of-billions to multi-trillion-dollar level, almost all that matters is for your company "to play the music" as well as the other (necessarily very big) market participants, "nothing more, nothing less" (again, quoting Irons in that movie).
There's nothing in it about "making what people/customers want", which is regarded as accessory, if it is taken into consideration at all. As another poster mentions in this thread, this is all the direct result of the financialization of much of the Western economy; this is how things work at this level, given these (financialized) inputs.
[1] https://www.youtube.com/watch?v=UOYi4NzxlhE
They've gotten away with shipping garbage for years and still getting paid for it. They think we're all stupid.
Given that they aren’t meeting their sales targets at all, I guess that’s a little bit encouraging about the discernment of their customers. I’m not sure how Microsoft has managed to escape market discipline for so long.
They think we're all stupid.
As time goes by, I'm starting to think they may be right more than they're wrong.
And this is a sad and depressing statement about humanity.
Not really. It's just that the point you have to push people to get them to start pushing back on something tends to be quite high. And it's very different for different people on different topics.
In the past this wasn't such a big deal because businesses weren't so large or so frequently run by myopic sociopaths. Ebenezer Scrooge was running some small local business, not a globe spanning empire entangling itself with government and then imposing itself on everybody and everything.
> So how to explain the current AI mania being widely promoted?
Probably individual actors have different motivations, but let's spitball for a second:
- LLMs are genuinely a revolution in natural language processing. We can do things now in that space that were unthinkable single-digit years ago. This opens new opportunity spaces to colonize, and some might turn out quite profitable. Ergo, land rush.
- Even if the new spaces are not that much of a value leap intrinsically, some may still end up obsoleting earlier-generation products pretty much overnight, and no one wants to be the next Nokia. Ergo, defensive land rush.
- There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI to serve, which in essence means deprecating the labor class. The benefits (to that specific someone, anyway...) would be staggering enough to make that a goal worth pursuing even if the odds of reaching it are unclear and arguably quite low.
- The increasingly leveraged debt that's funding the land rush's capex needs to be paid off somehow, and I'll venture everyone knows that the winners will possibly be able to, but not everyone will be a winner. In that scenario, you really don't want to be a non-winner. It's kind of like that joke where you don't need to outrun the lions, you only need to outrun the other runners, except in this case the harder everyone runs, the bigger the lions become. (Which is a funny thought now, sure, but the feasting, when it comes, will be a bloodbath.)
- A few, I'll daresay, have perhaps been huffing each other's farts too deep and too long and genuinely believe the words of ebullient enthusiasm coming out of their own mouths. That, and/or they think everyone's job except theirs is simple actually, and therefore just this close to being replaceable (which is a distinct flavor of fart, although coming from largely the same sources).
So basically the mania is for the most part a natural consequence of what's going on in the overlap of the tech itself and the incentive structure within which it exists, although this might be a good point to remember that cancer and earthquakes too are natural. Either way, take care of yourselves and each other, y'all, because the ride is only going to get bouncier for a while.
Thing is, it's hard to predict what can be done and what breakthrough or minor tweak can suddenly open up an avenue for a profitable use-case.
The cost of missing that opportunity is why they're heavily investing in AI, they don't want to miss the boat if there's going to be one.
And what else would they do? What's the other growth path?
> And what else would they do? What's the other growth path?
Are you arguing that if LLMs didn’t exist as a technology, they wouldn’t find anything to do and collapse?
this idea that AI is the only thing anyone could possibly do that might be useful has absolutely got to go
I think on some level it is being done on the premise that further advancement requires an enormous capital investment and if they can find a way to fund that with today’s sales it will give the opportunity for the tech to get there (quite a gamble).
I have a feeling that Microsoft is setting themselves up for a serious antitrust lawsuit if they do what they are intending on. They should really be careful about introducing products into the OS that take away from all other AI shops. I fear this would cripple innovation if allowed to do so as well, since Microsoft has drastically fatter wallets than most of their competition.
There's no such thing as antitrust in the US right now. Google's recent slap on the wrist is all the proof you need.
Under the current US administration the only thing Microsoft is getting is numerous piles of taxpayer bailouts.
Corruption is indeed going strong in the current corporate-controlled US group of lame actors posing as a government. At least Trump is now regularly falling asleep: the best evidence that you can use any surrogate puppet and the underlying policies will still continue.
> So how to explain the current AI mania being widely promoted?
> I think the best fit explanation is simple con artistry.
Yes, perhaps, but many industries are built on a little bit of technology and a lot of stories.
I think of it as us all being caught in one giant infomercial.
Meanwhile, as long as investors buy the hype, it's a great story to use for trimming payrolls.
Fake it till you make it.
outside of the recovery community, this is known as 'fraud'
It's not "pure greed." It's keeping up with the Joneses. It's fear.
There are three types of humans: mimics, amplifiers, originators. ~99% of the population are basic mimics, and they're always terrified - to one degree or another - of being out of step with the herd. The hyper mimicry behavior can be seen everywhere and at all times, from classrooms to Tiktok & Reddit to shopping behaviors. Most corporate leadership are highly effective mimics, very few are originators. They desperately herd follow ('nobody ever got fired for buying IBM').
This is the dotcom equivalent of every business must be e and @ ified (the advertising was aggressively targeted to that at the time). 1998-2000, you must be e ready. Your hotdog stand must have its own web site.
It is not greed-driven, it's fear-driven.
> In other words --- pure greed.
It's the opposite; it's FOMO.
It's not just AI mania, it's been this way for over a decade.
When I first started consulting, organizations were afraid enough of lack of ROI in tech implementations that projects needed an economic justification in order to be approved.
Starting with cloud, leadership seemed to become rare, and everything was "us too!".
After cloud it was data/data visualization, then it was over-hiring during Covid, then it was RTO, and now it's AI.
I wonder if we will ever return to rationality? The bellwether might be Tesla's stock price (at a rational valuation).
Imagine your supplier effectively telling you that they don't even value you (and your money) enough to bother a real human.
US technocapitalism is built on the premise of technological innovation driving exponential growth. This is why they are fixated on whatever provides an outlook for that. The risk that it might not work out is downplayed, because (a) they don’t want to hazard not being at the forefront in the event that it does work out, and (b) if it doesn’t work out, nobody will really hold them accountable for it, not the least because everybody does it.
With the mobile and cloud revolutions having run out of steam, AI is what promises the most growth by far, even if it is a dubious promise.
It’s a gamble, a bet on “the next big thing”. Because they would never be satisfied with there not being another “big thing”, or not being prominently part of it.
Riding hype waves forever is the most polar opposite thing to “sustainable” that I can imagine
I was just in a thread yesterday with someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was.
Everything about the conversation felt like talking to a true believer, and there's plenty out there.
It's the hopes and dreams of the Next Big Thing after blockchain and web3 fell apart and everyone is desperate to jump on the bandwagon because ZIRP is gone and everyone who is risk averse will only bet on what everyone else is betting on.
Thus, the cycle feeds itself until the bubble pops.
> "someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was."
I think that. It's new technology and it always takes some years before all the implications and applications of new technology are fully worked out. I also think that we're in a bubble that will hose a lot of people when it pops.
Two things can be true:
1) We have barely scratched the surface of what is possible to do with existing AI technology.
2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that is the state that most companies were at with _computers_. Huge, wasteful projects that didn't improve productivity at all. It took 10 years of false starts sometimes to really get traction.
All these boosters think we're on the leading edge of an exponential, when it's way more likely that we're at the midpoint or tail of a logistic curve.
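The exponential-vs-logistic point can be made concrete: in its early phase a logistic curve is numerically almost indistinguishable from an exponential, and the two only diverge as the logistic nears its inflection. A minimal sketch (the growth rate and carrying capacity here are illustrative assumptions, not a model of anything real):

```python
import math

def exponential(t, r=1.0):
    """Unbounded exponential growth starting at 1."""
    return math.exp(r * t)

def logistic(t, r=1.0, K=1000.0):
    """Logistic growth starting at 1 with carrying capacity K."""
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other closely;
# they diverge only as the logistic approaches its inflection point.
for t in range(8):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Which is exactly why data from the early phase can't tell you which curve you're on: the booster and the skeptic are fitting the same points.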
AI research has always been a series of occasional great leaps between slogs of iterative improvements, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will result in a few things becoming invisible architecture* we stop appreciating and then the next big leap starts the hype cycle anew.
*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.
They want to exfiltrate the customers' data under the guise of getting better "AI" responses.
No company or government in the EU should use this spyware.
It was the same with the cloud adoption. And I still think that cloud is expensive, wasteful and in the vast majority of cases not needed.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
I'd consider hallucinations to be a fundamental flaw that sets hard limits on the current utility of LLMs in any context.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
> for some reason HN has an extremely strong anti-AI sentiment
It's because I've used it and it doesn't come even close to delivering the value that its advocates claim it does. Nothing mysterious about it.
My suspicion is because they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and, rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
Or they might know better than you. A painful idea.
If you click through to the article shared yesterday[0]:
> Microsoft denies report of lowering targets for AI software sales growth
This Ars Technica article cites the same reporting as that Reuters piece but doesn't (yet) include anything about MSFT's rebuttal.
[0]: https://news.ycombinator.com/item?id=46135388
Hearing similar stories play out elsewhere too with targets being missed left and right.
There’s definitely something there with AI but a giant chasm between reality and the sales expectations on what’s needed to make the current financial engineering on AI make any sense.
The difference between poison and medicine is the amount. AI is great and very useful, but they want the AI to replace you instead of supporting your needs.
"AI everywhere" is worse than "AI nowhere". What we need is "AI somewhere".
That's what we had before LLMs. Without the financially imposed contrivance of it needing to be used everywhere, it was free to be used where it made sense.
I wonder if it’s because Microsoft is hyper focused on a bunch of crap people don’t want or need?
Even Devblogs and anything related to Java,.NET, C++ and Python out of Redmond seems to be all around AI and anything else are now low priority tickets on their roadmaps.
No wonder there is this exhaustion.
But will it sell to regular Windows Home users? If MS brings an ultimatum, "you need to buy AI services to use Windows", they might get a bunch more clueless subscribers. In the same way that there's no way to set up Windows without an internet connection and an MS account, they could make subscribing to Copilot mandatory.
I think Microsoft's long-term plan is exactly that: to make Windows itself a subscription product. Windows 12 Home for $4.99 a month, Copilot included. It will be called OSaaS.
> In the same way as there's no ability to set up Windows without internet connection and MS account
Not true. They're clearly unwilling or unable to remove this code path fully, or they would have done so by now. There's just a different workaround for it every few years.
Super interesting how this arc has played out for Microsoft. They went from having this massive advantage in being an early OpenAI partner with early access to their models to largely losing the consumer AI space: Copilot is almost never mentioned in the same breath as Claude and ChatGPT. Though I guess their huge stake in OpenAI will still pay out massively from a valuation perspective.
Microsoft seems to be actively discarding the consumer PC market for Windows. It's gamers and enterprise, it seems. Enterprise users don't get a lot of say in what's on their desktop.
What can you even do in the ms enterprise ecosystem with their copilot integration?
Is it just for chatting? Is it a glorified RAG?
Can you tell copilot co to create a presentation? Make a visualisation in a spreadsheet?
It wants to help create things in Office documents, I imagine just saving you the copy and paste from the app or web form. The one thing I tried to get it to do was take a spreadsheet of employees and add a column with their office numbers (it has access to the company directory). The response was something like "here's how you would look up an office number, you're welcome!"
It is functional at RAG stuff on internal docs but definitely not good - not sure how much of this is Copilot vs corporate disarray and access controls.
It won't send emails for me (which I would think is the agentic MVP), but that is likely a switch my organization daren't turn on.
TL;DR: it's valuable as a normal LLM, very limited as an add-on to Microsoft's software ecosystem.
Despite having an unlimited war chest, I'm not expecting Microsoft to come out as a winner of this AI race. The easy investment was throwing billions at OpenAI to gain access to their tech, but that leaves them in the weird position of not investing heavily in cultivating their own AI talent, and not being in control of their own destiny by having their own horse in the race with their own SOTA models.
Apple has a similar issue: unlimited wealth, but outsourcing to external SOTA model providers.
Have we finally reached peak AI already? In that event we will see the falling down phase next.
AI is people looking at EV hype and saying - I'll 100x it.
It has all the same components, just on much higher scale:
1. A billionaire con man convincing a large part of the market and industry (Altman in AI vs. Musk in EV) that the new tech will take over in a few years.
2. Insane valuations not supported by an actual ROI.
3. Very interesting and amazing underlying technology.
4. Governments jumping on the hype and enabling it.
The valuations are based on value, not revenue.
I went to Ignite a few weeks ago, and the theme of the event and most talks was "look at how we're leveraging AI in this product to add value".
Separately, the theme from talking to Every. Single. Person on the buy-side was a gigantic eye roll: "yes, I can't wait for AI to solve all my problems."
Companies I support are being directed by their presidents to use AI: literally a solution in search of a problem.
Top signal. Phase transition is imminent.
Lol "Microsoft can't make something work ergo the technology is not feasible".
Blaming slow sales on the salespeople is almost always scapegoating. The reality is that either the product sells or it doesn't.
Not saying that sales is useless, far from it. But with an established product that people know about, the sales team is more of a conduit than they are a resource-gathering operation.
> Reality is that either the product sells or it doesn’t.
Why do people use this useless phrase template?
Yeah, the point is that it's not selling, and it's not selling because people are getting increasingly skeptical about its actual value.
Is "The Information" credible? It's the sole source.
[dupe] https://news.ycombinator.com/item?id=46135388
made up story