What's interesting to me is that this kind of behavior -- slightly-buffleheaded synthesis of very large areas of discourse with widely varying levels of reliability/trustworthiness -- is actually sort of one of the best things about AI research, at least for me?
I'm pretty good at reading the original sources. But what I don't have in a lot of cases is a gut that tells me what's available. I'll search for some vague idea (like, "someone must have done this before") with the wrong jargon and unclear explanation. And the AI will... sort of figure it out and point me at a bunch of people talking about exactly the idea I just had.
Now, sometimes they're loons and the idea is wrong, but the search will tell me who the players are, what jargon they're using to talk about it, what the relevant controversies around the ideas are, etc... And I can take it from there. But without the AI it's actually a long road between "I bet this exists" and "Here's someone who did it right already".
Yeah, this is by far the biggest value I've gotten from LLMs: pointing me to areas of the literature that neither I nor any of my friends had heard of, but whose authors have spent a decade working on the very problems we're running into.
In this case, all that matters is that the outputs aren't complete hallucination. Once you know the magic jargon, everything opens up easily with traditional search.
I run a small business that buys from one of two suppliers of the items we need. The supplier has a TRASH website search feature. It's quicker to Google it.
Now that AI summaries exist, I have to scroll past half a page of results and nonsense about a Turkish oil company before I find the item I'm looking for.
I hate it. It's such a minor inconvenience, but it's just so annoying. Like a sore tooth.
Or you can take the alternative approach, where Microsoft's own "Merl" support agent says it knows anything to do with Minecraft, and then replies to basically any gameplay question with "I don't know that".
"Dangerous and Alarming" - it's tough; healthcare needs disruption, but unlike many targets for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human-in-the-loop applications and massively increasing the productivity of humans, before replacing them...
https://deadstack.net/cluster/google-removes-ai-overviews-fo...
Many people don’t get it: healthcare is really expensive. Even in countries with non-broken healthcare systems (not the US), costs increase rapidly, and no one is sure how those systems will remain solvent at today's level of care. The way things are currently done is entrenched but not sustainable; that's when disruptions are apt to appear.
I mean, if we're talking Christensenian disruption, that happens in neglected markets rather than currently dysfunctional ones. There's no shortage of actors wringing money out of health care, so there's not a disruptable space per se.
The solution is single payer. Any attempt to solve this with technological band-aids is completely futile. We know what the solution is because we see it work in every other developed nation. We don't have it because a class of billionaire donors doesn't want to pay into the system that allowed them to become fabulously wealthy. People claiming AI is the solution to healthcare access and affordability are delusional or lying to you.
There are good reasons to think single payer systems are not the answer. The numerous documented inefficiencies and inconveniences they suffer from don't need repeating here.
And many single payer systems around the world only appear to work as well as they do because the US effectively subsidizes medical costs through its own out of control prices.
IDK, the owners of retail clothing chains buy yachts and yet that sector is jaw-droppingly efficient at delivering clothes to people. Executives can be annoying tools but I don't think their pay is the problem.
Compare: Google's founders can buy all the yachts they could possibly eat, yet Google Searches are offered for free.
If we could get healthcare to that level, it would be great.
For a less extreme example: Wal-Mart and Amazon have made plenty of people very rich, and they charge customers for their goods; but their entrance into those markets has arguably brought down prices.
> Google's founders can buy all the yachts they could possibly eat, yet Google Searches are offered for free.
Google searches cost many billions of dollars: your confusion is because the customer isn’t the person searching but the advertisers paying to influence them. Healthcare can’t work like that not just because the real costs are both much higher and resistant to economies of scale but, critically, there aren’t people with deep pockets lining up to pay for you to be healthy. That’s why every other developed country sees better results for less money: keeping people healthy is a social good, and political forces work for that better than raw economic incentives.
We know that from observing evidence such as how much the government pays out in welfare to Wal-Mart employees.
Customers continue shopping there because human beings are typically incapable of accepting a short-term loss (higher price) for a long-term gain (product lasts more than three uses).
And Google search, a service on the level of a public utility, has been degrading noticeably for years in the face of shareholders demanding more and more returns.
Comparing something to a public utility is not me saying it's literally a public utility. Google runs a monopolistic service that is essential to a lot of our public life, in a segment that has high cost of entry and infrastructure cost. They make the service worse to make more money. It should be a regulated utility like electricity or railroads, we should have a public alternative like the post office is to UPS, or it should be nationalized. The situation gets more dire when you consider their browser monopoly.
Because insurance companies incentivize upward price momentum. The ones who innovate and bring prices down are not rewarded for their efforts. Health inflation is higher than headline inflation because of this absence of price pressure.
I sympathise with the argument. We should test it against real-world data.
E.g., your argument would predict that healthcare price inflation is not as bad in areas with less insurance coverage: for dental work (which is less often covered, as far as I can tell), for (vanity) plastic surgery, or we can even check healthcare price inflation for vet care for pets.
Dental and vanity surgeries aren't happening in a vacuum. There are baseline costs, e.g. anesthesia, recovery medications, and medical machinery, which are all bloated because the rest of the industry isn't under price pressure (a rising tide lifts all boats).
It's similar to how the AI data center buildout race is raising prices for consumer electronics in 2026 and beyond. The suppliers have no incentive to sell lower-cost products to a tiny niche.
I just looked it up, and apparently health care costs for pets has gone up in price even faster than for humans.
Pets typically don't have medical insurance, and any insurance that does exist there has a radically different regulatory regime than for humans.
Since 1980 for the US:
CPI has gone up by 3.16% on average per year (x4.17 in total). Human healthcare costs by 4.9% per year (x8.96 in total). And pet healthcare costs by 6.49% (or x17.87 in total).
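As a sanity check on those numbers: assuming compound annual growth, each quoted (average rate, total multiple) pair should imply the same time horizon, n = ln(total) / ln(1 + rate). A quick sketch:

```python
import math

# Each quoted (average annual growth, total multiple) pair should imply
# the same horizon if growth compounded annually: n = ln(total) / ln(1 + rate).
series = {
    "CPI":              (0.0316, 4.17),
    "human healthcare": (0.0490, 8.96),
    "pet healthcare":   (0.0649, 17.87),
}

for name, (rate, total) in series.items():
    years = math.log(total) / math.log(1 + rate)
    print(f"{name}: implied horizon ~ {years:.1f} years")
```

All three come out at roughly 46 years, so the figures are internally consistent with a 1980 baseline.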
This argument doesn’t make sense to me. Insurance companies are structurally incentivized to minimize payouts across the board. They want hospital bills lower, physician compensation lower, and patient payouts as small as possible. If insurers had unilateral power, total medical spending would collapse, not explode.
The real source of high medical costs is the entity that sets the hospital bill in the first place.
The explanation is much simpler than people want to admit, but emotionally uncomfortable: doctors and hospitals are paid more than the free market would otherwise justify. We hesitate to say this because they save lives, and we instinctively conflate moral worth with economic compensation. But markets don’t work that way.
Economics does not reward people based on what they “deserve.” It rewards scarcity. And physician labor is artificially scarce.
The supply of doctors is deliberately constrained. We are not operating in a free market here. Entry into the profession is made far more restrictive than is strictly necessary, not purely for safety, but to protect incumbents. This is classic supply-side restriction behavior, bordering on cartel dynamics.
We see similar behavior in law, but medicine is more insidious. Because medical practice genuinely requires guardrails to prevent harm and quackery, credentialing is non-negotiable. That necessity makes it uniquely easy to smuggle in protectionism under the banner of “safety.”
The result is predictable: restricted supply, elevated wages, and persistently high medical costs. The problem isn’t mysterious, and it isn’t insurance companies. It’s a supply bottleneck created and defended by the profession itself.
Insurance companies aren't innocent angels in this whole scenario either. When the hospital bill fucks them over they don't even blink twice when they turn around and fuck over the patient to bail themselves out. But make no mistake, insurance is the side effect, the profession itself is the core problem.
> This argument doesn’t make sense to me. Insurance companies are structurally incentivized to minimize payouts across the board. They want hospital bills lower, physician compensation lower, and patient payouts as small as possible. If insurers had unilateral power, total medical spending would collapse, not explode.
They absolutely do not.
They have their profit levels capped at 15% by law and regulation. That means if the insurer wants more absolute dollars of profit, prices must go up.
It also means that if they push prices down they necessarily have less funding to administer those plans, even if the needs are the same (same number of belly buttons, same patient demographics and state of health).
As you note there's also other variables, but this claim: "Insurance companies are structurally incentivized to minimize payouts across the board" is absolutely and categorically not so.
Because in America, at least, the supply of doctors is kept artificially low. That, combined with exploding administrative headcount, means patients are getting pretty terrible, expensive service.
Physician compensation is around 9% of healthcare spending. The number of non-physician providers (NPs, PAs and specialists like physical therapists and podiatrists) has also exploded over the last 20 years. We have far more overall providers per capita than we did 20 years ago.
I don't think physician compensation per se is a good metric for capturing the effect of lack of providers, because some of the increased costs are due to the bottlenecks in the services per se, in terms of procedure costs and types of procedures offered. I also don't think the number of providers per se under the current regime, without deregulation or reregulation of practice boundaries, is representative of what would happen if there were changes in those boundaries. Adding more optometrists 5 years ago isn't the same as changing what they're allowed to do. It also doesn't address what cost increases would have been without an increase in the number of providers.
9% might also seem pretty big to me if it's out of all spending and doesn't include other provider compensation? What if overall healthcare costs went down, but physician compensation stayed the same? Would that then be a problem because it was an increased proportion of the total costs — fat left to be trimmed, so to speak?
There are many problems that don't have anything to do with providers per se, but I also don't think you can glean much by extrapolating to more of the same, especially compensation per se.
Seriously? Spending a night in a hospital results in a $10,000 bill (though the real out of pocket is significantly cheaper. God help you if you have no insurance though). Healthcare in the US is the thing that needs the biggest disruption.
But no business is going to fix it. The market is captured. Only a radical change of insurance laws is going to have any impact. Mandate that insurance must be not for profit. Mandate at least decent minimal coverage standards and large insurance pools that must span age groups and risk groups.
Many hospitals are already non-profit. That doesn't seem to bring down prices. Why would you think that this would work for insurance?
Profit isn't even a big part of the overall revenue.
> Mandate at least decent minimal coverage standards
I assume you want higher coverage standards than what currently exists? Independently of whether that would be the morally right thing to do (or not), it would definitely increase prices.
> and large insurance pools that must span age groups and risk groups.
Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do when you don't have good models (or when regulation forces you).
> Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do when you don't have good models (or when regulation forces you).
Wuh? The more diverse the pool, the lower the risk. Your way of thinking will very quickly lead to "LiveCheap: the health insurance for fit, healthy under 30s only" for dollars a month, and "SucksToBeYou: the health insurance for the geriatric and chronically disabled" for the low low cost of "everything you have to give".
There's insurance which allows you to convert an uncertain danger into a known payment. And then there's welfare and redistribution.
By all means, please run some means testing and give the poor and sick or disabled extra money. Or even just outright pay their insurance premiums.
But please finance that from general taxation, which is already progressive. Instead of effectively slapping an arbitrary tax on healthy people, whether they be rich or poor. And please don't give rich people extra stealth welfare, just because they are in less than ideal health, either.
Just charge people insurance premiums in line with their expected health outcomes, and help poor people with the premiums using funds from general taxation. (Where poor here means: take their income and make an adjustment for disability etc.)
We _want_ the guy who loses 5kg and gives up smoking to get lower insurance premiums. That's how you set incentives right.
> The more diverse the pool, the lower the risk.
No. The diversification comes from the insurance company running lots of uncorrelated contracts at the same time and having a big balance sheet. For that, it doesn't matter whether it's a pool of similar insurance contracts, or whether they have bets on your insurance contract, on the price of rice in China, and are playing the bookie on some sports outcomes, etc. In fact, the more diversified they are, the better (in principle).
But that diversification is completely independent of the pricing of your individual insurance contract.
Have a look at Warren Buffett's 'March Madness' challenge, where he famously challenged people to predict all 67 outcomes of some basketball games to win a billion dollars. Warren Buffett ain't no fool: he doesn't need a pool; he can price the risk of someone winning this one-off challenge.
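The diversification point can be illustrated with a toy simulation (the 10%-chance, 100-unit claim numbers are made up for illustration): the expected payout per contract, and hence the actuarially fair premium, is the same at any pool size; what shrinks with more independent contracts is only the insurer's uncertainty about the average.

```python
import random
import statistics

def simulate_avg_payout(n_contracts, trials=500, p=0.1, claim=100):
    """Average payout per contract across many simulated years, where each
    contract independently pays out `claim` with probability `p`."""
    per_trial = []
    for _ in range(trials):
        total = sum(claim for _ in range(n_contracts) if random.random() < p)
        per_trial.append(total / n_contracts)
    return statistics.mean(per_trial), statistics.stdev(per_trial)

random.seed(0)
for n in (1, 100, 2500):
    mean, spread = simulate_avg_payout(n)
    print(f"{n:>5} contracts: mean payout/contract ~ {mean:5.1f}, spread ~ {spread:.2f}")
```

The mean hovers around 10 (p times claim, the fair premium) regardless of pool size, while the spread falls roughly as 1/sqrt(N). So pooling stabilizes the insurer's book without changing what any individual contract is worth.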
This is going to pretty rapidly devolve into cheap for healthy and insanely expensive for those that aren’t. Genetic propensities will be a lifelong financial burden. Cancer patients will get priced out and die.
These solutions are often proposed as easy fixes but I'm skeptical that they actually will do much to reduce healthcare costs. Healthcare is fundamentally expensive. Not-for-profit hospitals and for-profit hospitals don't really substantively differ in terms of out-of-pocket expenditures for patients; I find it difficult to imagine that forcing insurance companies to be nonprofit would do much to reduce costs.
> large insurance pools that must span age groups and risk groups.
What you describe (community rating) has been tried and it works. But it requires that a lot of young, healthy people enroll, and seniors receive most of the care. In an inverted demographic pyramid like most Western economies have, this is a ticking time bomb, so costs will continue to rise.
> Mandate at least decent minimal coverage standards
I think a better solution is to allow the government leverage to negotiate prices with companies, as Canada does; it greatly reduces rent-seeking behavior by pharmaceutical companies while allowing them to continue earning profits and innovating. (I understand a lot of the complaints against big pharma, but they are actually one of the few sectors of the economy that doesn't park its wealth and actually uses it for substantive R&D, despite what the media will tell you, and countless lives have been saved because of pharma company profits.)
Essentially the gist of what I'm saying, as someone who has been involved with and studied this industry for the better part of five years, is that it's much more complex than what meets the eye.
There are a lot of not-for-profit insurance companies and they aren't noticeably cheaper, though I'm not in HR and they may well be cheaper for the employer.
Disruption, yes, in the sense that the current system needs to be overhauled. But this is a space frequented by the SV and VC crowd, where "disruption" has very different connotations, usually in the realm of thought that suggests some SV-brained solution to an existing problem. In some edge cases like Uber/Lyft, this upending of an existing market can yield substantial positive externalities for users. In other "heavy industry"-adjacent sectors, not so much. Healthcare and aviation, not so much.
Even SpaceX's vaunted "disruption" is just clever resource allocation; despite their iterative approach to building rockets being truly novel they're not market disruptors in the same way SV usually talks about them. And their approach has some very obvious flaws relative to more traditional companies like BO, which as of now has a lower failure-to-success ratio.
I don't think you'll find many providers clamoring for an AI-assisted app that hallucinates nonexistent diseases; there are plenty of those already out there that draw the ire of many physicians. Where the industry needs to innovate is in the insurance space, which is responsible for the majority of costs. The captive market and cartel behavior there make this a policy and government issue, not something that can be solved with rote Silicon Valley-style, startup-initiated disruption; that, I would predict, would quickly turn into dysfunction and eventual failure.
Enshittification has done a lot of damage to the concept of "disrupting" markets. It's DOA in risk-averse fields.
The profit in insurance is the volume, not the margin. Disrupting it will not dramatically change outcomes, and will require changes to regulation, not business policy.
Agreed. I'd also argue that there will always be the issue of adverse selection, which in any system that doesn't mandate that all individuals be covered for healthcare regardless of risk profile, will continue to raise costs regardless of whether or not margins are good or bad. That dream died with the individual mandate, and if the nation moves even further away from universal healthcare, we will only see costs rise and not fall as companies shoulder more and more of the relative risk.
Tangent, but some people I know have been downloading their genomes from 23andme and asking Gemini via Antigravity to analyze it. "If you don't die of heart disease by 50, you'll probably live to be 100."
Are you asking for you in particular? It's certainly not accurate in general that anyone that made it to 50 is likely to live to 100.
One I heard was: if you make it to 80, you have a 50% chance of making it to 90. If you make it to 90, you have a 50% chance of making it to 95. From 95 to 97.5, again a 50% chance. That's for the general population in a first-world country, though, not any individual.
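Taking those quoted 50% hurdles at face value (a sketch, not real actuarial data), the conditional probabilities chain multiplicatively:

```python
# Survival probabilities chain: P(reach 95 | alive at 80)
# = P(reach 90 | 80) * P(reach 95 | 90), and so on.
hurdles = [(80, 90, 0.5), (90, 95, 0.5), (95, 97.5, 0.5)]

p = 1.0
for start, end, p_stage in hurdles:
    p *= p_stage
    print(f"P(reach {end} | alive at 80) = {p}")
```

That gives 0.5, 0.25, and 0.125: under these numbers, only about one person in eight who reaches 80 would reach 97.5.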
As accurate as our knowledge of genetics, which is not very outside of the identified set of pathological genes associated with hereditary disorders.
Your genome is very complex and we don’t have a model of how every gene interacts with every other and how they’re affected by your environment. Geneticists are working on it, but it’s not here yet.
And remember that 23andMe, Ancestry, and most other services only sequence around 1% of your genome.
Part of genetics is pattern matching, and the last time I checked I still couldn't find a model that can correctly solve hard Sudokus (well, assuming you don't pick a coding model that writes a Sudoku solver... maybe some of them could approach genetics by writing correct algorithms), a trivial job for a program designed to do it.
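For contrast, the "program designed to do it" really is short. A minimal backtracking solver, as a sketch (real solvers add constraint propagation for speed):

```python
def solve(grid):
    """Solve a 9x9 Sudoku in place (0 = empty cell); returns True on success."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    # v must not already appear in the row, column, or 3x3 box
                    ok = (all(grid[r][k] != v for k in range(9)) and
                          all(grid[k][c] != v for k in range(9)) and
                          all(grid[r // 3 * 3 + i][c // 3 * 3 + j] != v
                              for i in range(3) for j in range(3)))
                    if ok:
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False  # dead end: no digit fits this cell
    return True  # no empty cells left: solved
```

Plain backtracking like this finishes typical "hard" newspaper puzzles in well under a second; a few dozen lines of deterministic search reliably do what a language model guessing token by token cannot.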
Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs slopping about anything to do with health or medical info.
In a way, "overconfident guessing" is a better match for the behavior than "hallucination" or "fabrication" would be.
"confabulation", though, seems perfect:
“Confabulation is distinguished from lying as there is no intent to deceive and the person is unaware the information is false. Although individuals can present blatantly false information, confabulation can also seem to be coherent, internally consistent, and relatively normal.”
Removing "some" doesn't make it worse; that would only be the case if the new title implied "all", which it doesn't. "Google removes AI health summaries after investigation finds dangerous flaws" is functionally equivalent to "Google removes some of its AI summaries after users' health put at risk".
Oh, and also, the Ars article itself still contains the word "Some" (on my AB test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"
Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to their Gemini offerings, including the free one. The continued existence of it is baffling.
It's mystifying. A relative showed me a heavily AI-generated video claiming a Tesla wheelchair was coming (self-driving of course, with a sub-$800 price tag). I tried to Google it to quickly debunk and got an AI Overview confidently stating it was a real thing. The source it linked to: that same YouTube video!
Yeah. It's the final nail in the coffin of search, which now actively surfaces incorrect results when it isn't serving ads that usually deliberately pretend to be the site you're looking for. The only thing I use it for any more is to find a site I know exists but I don't know the URL of.
The AI summaries clearly aren’t bad. I’m not sure what kind of weird shit you search for that you consider the summaries bad. I find them helpful and click through to the cited sources.
But only for some highly specific searches, when what it should be doing is checking if it's any sort of medical query and keeping the hell out of it because it can't guarantee reliability.
It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.
Google for "malay people acne" or other acne-related queries. It will readily spit out the dumbest pseudo science you can find. The AI bot finds a lot of dumb shit on the internet which it serves back to you on the Google page. You can also ask it about the Kangen MLM water scam. Why do athletes drink Kangen water? "Improved Recovery Time" Sure buddy.
Going off-topic: the "health benefits of circumcision" bogus claims have existed for decades. The search engines return bogus information because the topic is mostly relevant for its social and religious implications.
I have personal experience with the topic, and the discussion is similar to topics in politics: most people don't care and will stay quiet, while a very aggressive group will sell it as a panacea.
The problem isn't that search engines are polluted; that's well known. The problem is that people perceive these AI responses as something greater than a search query: they view them as an objective viewpoint reasoned out by some sound logical method, and anyone who understands how LLMs operate knows that they don't really do that, except in some very specific edge cases.
If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.
https://www.fda.gov/medical-devices/digital-health-center-ex...
How do you suggest to deal with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it's a major participant in the discussion.
Apparently we should hire the Guardian to evaluate LLM output accuracy?
Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?
In many areas AI has become this toy that we use because it looks real enough.
It sometimes works for some things in math and science because we test its output, but overall you don't go to Gemini and have it say "there's an 80% chance this is correct". At least then you could evaluate that claim.
There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.
> How do you suggest to deal with Gemini?
Don't. I do not ask my mechanic for medical advice, why would I ask a random output machine?
This "random output machine" is already in wide use in medicine, so why exactly not? Should I trust the young doctor fresh out of university more by default, or should I take advice from both of them with a grain of salt? I've had failures and successes with both, but lately I've found Gemini to be extremely good at what it does.
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
Ideally, hold Google liable until their AI doesn’t confabulate medical advice.
Realistically, sign a EULA waiving your rights because their AI confabulates medical advice.
> How do you suggest to deal with Gemini?
With robust fines based on % of revenue whenever it breaks the law, would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.
Not surprised. Another example is Minecraft-related queries. I'm searching with the intention of eventually going to a certain wiki page at minecraft.wiki, but started to just read the summaries instead. They combine fan forums discussing desired features/ideas with the actual game bible at minecraft.wiki, mixing one source of truth with one source of fantasy. The result is ridiculously inaccurate summaries.
A few months ago in a comment here on HN I speculated about the reason an old law might have been written the way it was, instead of more generally. If it had been written without the seemingly arbitrary restrictions it included there would have been no need for the new law that the thread was about.
A couple hours later I decided to ask an LLM if it could tell me. It quickly answered, giving the same reason that I had guessed in my HN comment.
I then clicked the two links it cited as sources. One was completely irrelevant. The other was a link to my HN comment.
I had a similar thing happen to me just today. A friend of mine had finished a book in a series. I have read the series but it was almost 10 years ago, and I needed a refresher with spoilers, so I went looking.
Well, some redditor had posted a comparison of a much later book in the series, and drawn all sorts of parallels and foreshadowing and references between this quite early book I was looking for and the much later one. It was an interesting post so it had been very popular.
The AI summary completely confused the two books because of this single reddit post, so the summary I got was hopelessly poisoned with plot points and characters that wouldn't show up until nearly the conclusion. It simply couldn't tell which book was which. It wasn't quite as ridiculous as having, say, Anakin Skywalker face Kylo Ren in a lightsaber duel, but it was definitely along those same lines of confusion.
Fortunately, I finished the later book recently enough to remember it, but it was like reading a fever dream.
It's a common problem.
At some point in time, when asked how many Kurdish people live in Poland, Google's AI would say several million, which was true only in a fantasy world conjured by a certain LARP group who put a wiki on fandom.com.
Yeah, it happened recently for a kubernetes resource. I was searching for how to do something, and Google AI helpfully showed me a kubernetes resource that was exactly what I needed, and was designed to work exactly how I needed it.
Sadly, the resource didn't actually exist. It would have been perfect if it did, though!
I find it's tricky with games, especially ones updated as frequently as Minecraft has been over the years. I've had some of this trouble with OSRS. It brings in old info, or info from a League/Event that isn't relevant. Easier to just go to the insanely well-curated wiki.
What's interesting to me is that this kind of behavior -- slightly-buffleheaded synthesis of very large areas of discourse with widely varying levels of reliability/trustworthiness -- is actually sort of one of the best things about AI research, at least for me?
I'm pretty good at reading the original sources. But what I don't have in a lot of cases is a gut that tells me what's available. I'll search for some vague idea (like, "someone must have done this before") with the wrong jargon and unclear explanation. And the AI will... sort of figure it out and point me at a bunch of people talking about exactly the idea I just had.
Now, sometimes they're loons and the idea is wrong, but the search will tell me who the players are, what jargon they're using to talk about it, what the relevant controversies around the ideas are, etc... And I can take it from there. But without the AI it's actually a long road between "I bet this exists" and "Here's someone who did it right already".
Yeah, this is by far the biggest value I've gotten from LLMs: just pointing me to the area of literature neither I nor any of my friends have heard of, but which has spent a decade writing about the problems we're running into.
In this case, all that matters is that the outputs aren't complete hallucination. Once you know the magic jargon, everything opens up easily with traditional search.
I run a small business that buys from one of two suppliers of the items we need. The supplier has a TRASH website search feature. It's quicker to Google it.
Now that AI summaries exist, I have to scroll past half a page of results and nonsense about a Turkish oil company before I find the item I'm looking for.
I hate it. It's such a minor inconvenience, but it's just so annoying. Like a sore tooth.
You can cure Google's AI (but not bad results) with &udm=14
https://google.com/search?q=parkas&udm=14
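If you build such URLs programmatically, a minimal sketch (assuming `udm=14` continues to mean the plain "Web" results view without AI Overviews):

```python
from urllib.parse import urlencode

# Build a Google search URL with udm=14, which requests the plain
# "Web" results tab and skips the AI Overview block.
params = {"q": "parkas", "udm": "14"}
url = "https://google.com/search?" + urlencode(params)
print(url)  # -> https://google.com/search?q=parkas&udm=14
```

Some people set this up as a custom search engine in their browser so every query goes through `udm=14` by default.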
Or you can take the alternative approach, where Microsoft's own "Merl" support agent says it knows anything to do with Minecraft, and then replies to basically any gameplay question with "I don't know that".
"Dangerous and Alarming" - it's tough; healthcare needs disruption, but unlike many places to target for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human-in-the-loop applications and massively increasing the productivity of humans, before replacing them... https://deadstack.net/cluster/google-removes-ai-overviews-fo...
Why does healthcare "need disruption"?
Many people don't get it: healthcare is really expensive. Even in countries with non-broken healthcare systems (not the US), costs increase rapidly, and no one is sure how the systems will remain solvent with the same level of care given today. The way things are currently done is entrenched but not sustainable; that's when disruptions are apt to appear.
I mean, if we're talking Christensenian disruption, that happens in neglected markets rather than currently dysfunctional ones. There's no shortage of actors wronging money out of health care so there's not a disruptable space per se.
> wronging
Not sure if this is a typo (of wringing) or a pun, but it's apt either way.
Society pretends that human doctors are better than they really are, and AI is worse than it really is.
It's the self-driving cars debate all over again.
It's inefficient and not living up to its potential
And "disruption" (a pretty ill-defined term) is the solution to that?
The solution is single payer. Any attempt to solve this with technological band-aids is completely futile. We know what the solution is because we see it work in every other developed nation. We don't have it because a class of billionaire donors doesn't want to pay into the system that allowed them to become fabulously wealthy. People who are claiming AI is the solution to healthcare access and affordability are delusional or lying to you.
There are good reasons to think single payer systems are not the answer. The numerous documented inefficiencies and inconveniences they suffer from don't need repeating here.
And many single payer systems around the world only appear to work as well as they do because the US effectively subsidizes medical costs through its own out of control prices.
Maybe for R&D but your outsized costs are also due to your liability environment and direct incentives against preventative healthcare.
We'd rather have medical bankruptcy than a functional system.
That's because it's incredibly corrupted, not because it needs disruption. Unless the disruption comes in the form of jail time.
The inefficiency is the buying of yachts for billionaires.
IDK, the owners of retail clothing chains buy yachts and yet that sector is jaw-droppingly efficient at delivering clothes to people. Executives can be annoying tools but I don't think their pay is the problem.
Compare: Google's founders can buy all the yachts they could possibly eat, yet Google Searches are offered for free.
If we could get healthcare to that level, it would be great.
For a less extreme example: Wal-Mart and Amazon have made plenty of people very rich, and they charge customers for their goods; but their entrance into the markets has arguably brought down prices.
> Google's founders can buy all the yachts they could possibly eat, yet Google Searches are offered for free.
Google searches cost many billions of dollars: your confusion is because the customer isn’t the person searching but the advertisers paying to influence them. Healthcare can’t work like that not just because the real costs are both much higher and resistant to economies of scale but, critically, there aren’t people with deep pockets lining up to pay for you to be healthy. That’s why every other developed country sees better results for less money: keeping people healthy is a social good, and political forces work for that better than raw economic incentives.
Wal-Mart and Amazon have reduced wages for employees and the quality of purchased goods more than they have improved prices for consumers.
How do we know that?
And why do customers come back to shop there?
We know that from observing evidence such as how much the government pays out in welfare to Wal-Mart employees.
Customers continue shopping there because human beings are typically incapable of accepting a short-term loss (higher price) for a long-term gain (product lasts more than three uses).
And Google search, a service on the level of a public utility, has been degrading noticeably for years in the face of shareholders demanding more and more returns.
How is Google Search a public utility?
Comparing something to a public utility is not me saying it's literally a public utility. Google runs a monopolistic service that is essential to a lot of our public life, in a segment that has high cost of entry and infrastructure cost. They make the service worse to make more money. It should be a regulated utility like electricity or railroads, we should have a public alternative like the post office is to UPS, or it should be nationalized. The situation gets more dire when you consider their browser monopoly.
It's inefficient and not living up to its potential
Yeah, because we saw what a great job the tech bros did making government more efficient.
Because insurance companies incentivize upward price momentum. The ones who innovate and bring the prices down are not rewarded for their efforts. Health inflation is higher than headline inflation because of this absence of price pressure
I sympathise with the argument. We should test it against real-world data.
E.g. your argument would predict that healthcare price inflation is not as bad in areas with less insurance coverage: for dental work (which is less often covered, as far as I can tell), for (vanity) plastic surgery, or we can even check healthcare price inflation for vet care for pets.
Dental and vanity surgeries aren't happening in a vacuum. There are baseline costs, e.g. anesthesia, recovery medications, medical machinery, etc., which are all bloated because the rest of the industry isn't under price pressure (a rising tide lifts all boats).
It's similar to how the AI data center buildout race is raising prices for consumer electronics in 2026 and beyond. The suppliers have no incentive to sell lower-cost products to a tiny niche.
I just looked it up, and apparently health care costs for pets has gone up in price even faster than for humans.
Pets typically don't have medical insurance, and any insurance that does exist there has a radically different regulatory regime than for humans.
Since 1980 for the US:
CPI has gone up by 3.16% on average per year (x4.17 in total). Human healthcare costs by 4.9% per year (x8.96 in total). And pet healthcare costs by 6.49% per year (x17.87 in total).
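The per-year and cumulative figures above are consistent under compound growth; a quick sketch (assuming the span 1980 to roughly 2026, about 46 years, with small rounding differences expected):

```python
# Rough compounding check of the figures above: total multiplier
# after ~46 years at each average annual rate.
rates = {"CPI": 0.0316, "human healthcare": 0.049, "pet healthcare": 0.0649}
for label, rate in rates.items():
    total = (1 + rate) ** 46
    print(f"{label}: x{total:.2f}")
```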
I suspect this has something to do with it: https://www.theatlantic.com/ideas/archive/2024/04/vet-privat...
This argument doesn’t make sense to me. Insurance companies are structurally incentivized to minimize payouts across the board. They want hospital bills lower, physician compensation lower, and patient payouts as small as possible. If insurers had unilateral power, total medical spending would collapse, not explode.
The real source of high medical costs is the entity that sets the hospital bill in the first place.
The explanation is much simpler than people want to admit, but emotionally uncomfortable: doctors and hospitals are paid more than the free market would otherwise justify. We hesitate to say this because they save lives, and we instinctively conflate moral worth with economic compensation. But markets don’t work that way.
Economics does not reward people based on what they “deserve.” It rewards scarcity. And physician labor is artificially scarce.
The supply of doctors is deliberately constrained. We are not operating in a free market here. Entry into the profession is made far more restrictive than is strictly necessary, not purely for safety, but to protect incumbents. This is classic supply-side restriction behavior, bordering on cartel dynamics.
See, for example: https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...
We see similar behavior in law, but medicine is more insidious. Because medical practice genuinely requires guardrails to prevent harm and quackery, credentialing is non-negotiable. That necessity makes it uniquely easy to smuggle in protectionism under the banner of “safety.”
The result is predictable: restricted supply, elevated wages, and persistently high medical costs. The problem isn’t mysterious, and it isn’t insurance companies. It’s a supply bottleneck created and defended by the profession itself.
Insurance companies aren't innocent angels in this whole scenario either. When the hospital bill fucks them over they don't even blink twice when they turn around and fuck over the patient to bail themselves out. But make no mistake, insurance is the side effect, the profession itself is the core problem.
> This argument doesn’t make sense to me. Insurance companies are structurally incentivized to minimize payouts across the board. They want hospital bills lower, physician compensation lower, and patient payouts as small as possible. If insurers had unilateral power, total medical spending would collapse, not explode.
They absolutely do not.
They have their profit levels capped at 15% by law and regulation. That means if the insurer wants more absolute dollars of profit, prices must go up.
It also means that if they push prices down they necessarily have less funding to administer those plans, even if the needs are the same (same number of belly buttons, same patient demographics and state of health).
As you note, there are also other variables, but this claim, "Insurance companies are structurally incentivized to minimize payouts across the board", is absolutely and categorically not so.
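The incentive being argued here can be sketched in two lines, assuming the commenter's flat 15% cap on margin as a share of revenue (that figure is the commenter's claim, not verified here):

```python
def max_profit(total_premiums, cap=0.15):
    # If margin is capped as a fraction of revenue, the only way to
    # grow absolute profit is to grow total premiums, i.e. prices.
    return cap * total_premiums

# Doubling premiums doubles the maximum allowed profit.
print(max_profit(1_000_000), max_profit(2_000_000))
```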
Because in America, at least, the supply of doctors is kept artificially low. That, combined with exploding administrative headcount, means patients are getting pretty terrible, expensive service.
Physician compensation is around 9% of healthcare spending. The number of non-physician providers (NPs, PAs and specialists like physical therapists and podiatrists) has also exploded over the last 20 years. We have far more overall providers per capita than we did 20 years ago.
Lack of providers isn’t what’s driving up costs.
I don't think physician compensation per se is a good metric for capturing the effect of lack of providers, because some of the increased costs are due to the bottlenecks in the services per se, in terms of procedure costs and types of procedures offered. I also don't think the number of providers per se under the current regime, without deregulation or reregulation of practice boundaries, is representative of what would happen if there were changes in those boundaries. Adding more optometrists 5 years ago isn't the same as changing what they're allowed to do. It also doesn't address what cost increases would have been without an increase in the number of providers.
9% might also seem pretty big to me if it's out of all spending and doesn't include other provider compensation? What if overall healthcare costs went down, but physician compensation stayed the same? Would that then be a problem because it was an increased proportion of the total costs — fat left to be trimmed, so to speak?
There are many problems that don't have anything to do with providers per se, but I also don't think you can glean much by extrapolating to more of the same, especially compensation per se.
Seriously? Spending a night in a hospital results in a $10,000 bill (though the real out of pocket is significantly cheaper. God help you if you have no insurance though). Healthcare in the US is the thing that needs the biggest disruption.
But no business is going to fix it. The market is captured. Only a radical change of insurance laws is going to have any impact. Mandate that insurance must be not for profit. Mandate at least decent minimal coverage standards and large insurance pools that must span age groups and risk groups.
Many hospitals are already non-profit. That doesn't seem to bring down prices. Why would you think that this would work for insurance?
Profit isn't even a big part of the overall revenue.
> Mandate at least decent minimal coverage standards
I assume you want higher coverage standards than what currently exists? Independently of whether that would be the morally right thing to do (or not), it would definitely increase prices.
> and large insurance pools that must span age groups and risk groups.
Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do, when you don't have good models (or when regulations forces you).
> Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do, when you don't have good models (or when regulations forces you).
Wuh? The more diverse the pool, the lower the risk. Your way of thinking will very quickly lead to "LiveCheap: the health insurance for fit, healthy under 30s only" for dollars a month, and "SucksToBeYou: the health insurance for the geriatric and chronically disabled" for the low low cost of "everything you have to give".
You are mixing things up.
There's insurance which allows you to convert an uncertain danger into a known payment. And then there's welfare and redistribution.
By all means, please run some means testing and give the poor and sick or disabled extra money. Or even just outright pay their insurance premiums.
But please finance that from general taxation, which is already progressive. Instead of effectively slapping an arbitrary tax on healthy people, whether they be rich or poor. And please don't give rich people extra stealth welfare, just because they are in less than ideal health, either.
Just charge people insurance premiums in line with their expected health outcomes, and help poor people with the premiums using funds from general taxation. (Where poor here means: take their income and make an adjustment for disability etc.)
We _want_ the guy who loses 5kg and gives up smoking to get lower insurance premiums. That's how you set incentives right.
> The more diverse the pool, the lower the risk.
No. The diversification comes from the insurance company running lots of uncorrelated contracts at the same time and having a big balance sheet. For that, it doesn't matter whether it's a pool of similar insurance contracts, or whether they have bets on your insurance contract, and on the price of rice in China, and playing the bookie on some sports outcomes, etc. In fact, the more diversified they are, the better (in principle).
But that diversification is completely independent of the pricing of your individual insurance contract.
Have a look at Warren Buffett's 'March Madness' challenge, where he famously challenged people to predict all 67 outcomes of some basketball games to win a billion dollars. Warren Buffett ain't no fool: he doesn't need a pool; he can price the risk of someone winning this one-off challenge.
More generally, have a look at Prize indemnity insurance https://en.wikipedia.org/wiki/Prize_indemnity_insurance which helps insure many one-off events.
This is going to pretty rapidly devolve into cheap for healthy and insanely expensive for those that aren’t. Genetic propensities will be a lifelong financial burden. Cancer patients will get priced out and die.
These solutions are often proposed as easy fixes but I'm skeptical that they actually will do much to reduce healthcare costs. Healthcare is fundamentally expensive. Not-for-profit hospitals and for-profit hospitals don't really substantively differ in terms of out-of-pocket expenditures for patients; I find it difficult to imagine that forcing insurance companies to be nonprofit would do much to reduce costs.
> large insurance pools that must span age groups and risk groups.
What you describe (community rating) has been tried and it works. But it requires that a lot of young, healthy people enroll, and seniors receive most of the care. In an inverted demographic pyramid like most Western economies have, this is a ticking time bomb, so costs will continue to rise.
> Mandate at least decent minimal coverage standards
I think a better solution is to allow the government to negotiate prices with companies, as Canada does; it greatly reduces rent-seeking behavior by pharmaceutical companies while allowing them to continue earning profits and innovating. (I understand a lot of the complaints against big pharma, but they are actually one of the few sectors of the economy that doesn't park its wealth and actually uses it for substantive R&D, despite what the media will tell you, and countless lives have been saved because of pharma company profits.)
Essentially the gist of what I'm saying, as someone who has been involved with and studied this industry for the better part of five years, is that it's much more complex than what meets the eye.
There are a lot of not-for-profit insurance companies and they aren't noticeably cheaper, though I'm not in HR and they may well be cheaper for the employer.
My fiancée was in hospital recently for a fairly common disease. She arrived at 2200 Wednesday night and was discharged 1000 Saturday morning.
Her bill before "insurance negotiated prices" was $59,000. Effectively $1,000/hr, 24/7.
Disruption, yes, in the sense that the current system needs to be overhauled. But this is a sector frequented by the SV and VC crowd, where "disruption" has very different connotations, usually suggesting some SV-brained solution to an existing problem. In some edge cases like Uber/Lyft, this upending of an existing market can yield substantial positive externalities for users. Other "heavy industry"-adjacent sectors, not so much. Healthcare and aviation, not so much.
Even SpaceX's vaunted "disruption" is just clever resource allocation; despite their iterative approach to building rockets being truly novel they're not market disruptors in the same way SV usually talks about them. And their approach has some very obvious flaws relative to more traditional companies like BO, which as of now has a lower failure-to-success ratio.
I don't think you'll find many providers clamoring for an AI-assisted app that hallucinates nonexistent diseases; there are plenty of those already out there that draw the ire of many physicians. Where the industry needs to innovate is the insurance space, which is responsible for the majority of costs, and whose captive market and cartel behavior mean this is a policy and government issue, not something that can be solved with rote Silicon Valley-style startup-initiated disruption, which I predict would quickly turn into dysfunction and eventual failure.
Enshittification has done a lot of damage to the concept of "disrupting" markets. It's DOA in risk-averse fields.
The part that needs disrupting is the billionaires who own insurance companies and demand profit from people's health.
The profit in insurance is the volume, not the margin. Disrupting it will not dramatically change outcomes, and will require changes to regulation, not business policy.
Agreed. I'd also argue that there will always be the issue of adverse selection, which in any system that doesn't mandate that all individuals be covered for healthcare regardless of risk profile, will continue to raise costs regardless of whether or not margins are good or bad. That dream died with the individual mandate, and if the nation moves even further away from universal healthcare, we will only see costs rise and not fall as companies shoulder more and more of the relative risk.
Profit is a small part of overall revenue.
Tangent, but some people I know have been downloading their genomes from 23andme and asking Gemini via Antigravity to analyze it. "If you don't die of heart disease by 50, you'll probably live to be 100."
I wonder how accurate it is.
Are you asking for you in particular? It's certainly not accurate in general that anyone that made it to 50 is likely to live to 100.
One I heard was: if you make it to 80, you have a 50% chance to make it to 90. If you make it to 90, you have a 50% chance to make it to 95. From 95 to 97.5, again a 50% chance. That's for the general population in a first-world country, though, not any individual.
As accurate as our knowledge of genetics, which is not very outside of the identified set of pathological genes associated with hereditary disorders.
Your genome is very complex and we don’t have a model of how every gene interacts with every other and how they’re affected by your environment. Geneticists are working on it, but it’s not here yet.
And remember that 23andMe, Ancestry, and most other services only sequence around 1% of your genome.
I'd guess it's much less accurate than that.
Part of genetics is pattern matching, and the last time I checked I still couldn't find a model that can correctly solve hard Sudokus (well, assuming you don't pick a coding model that writes a Sudoku solver; maybe some of them are trying to do genetics by running correct algorithms), a trivial job if you write a program that is designed to do it.
> Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.
Notice how little this sentence says about whether anything is any good.
This incessant, unchecked[1] peddling is what deprives "AI" of the good name it could earn for the things it's good at.
But alas, infinite growth or nothing is the name of the game now.
[1] Well, not entirely unchecked, thanks to people investigating.
The fact that it reached this point is further evidence that if the AI apocalypse is a possibility, common sense will not save us.
... at the same time, OpenAI launches their ChatGPT Health service: https://openai.com/index/introducing-chatgpt-health/, marketed as "a dedicated experience in ChatGPT designed for health and wellness."
So interesting to see the vastly different approaches to AI safety from all the frontier labs.
Why vastly different?
Aren't they both searching various online sources for relevant information and feeding that into the LLM?
Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs slopping about anything to do with health or medical info.
s/hallucinated/fabricated/, please.
arguably: incorrectly guessed*
in a way, all overconfident guessing is a better match for the result than hallucination or fabrication would be
"confabulation", though, seems perfect:
“Confabulation is distinguished from lying as there is no intent to deceive and the person is unaware the information is false. Although individuals can present blatantly false information, confabulation can also seem to be coherent, internally consistent, and relatively normal.”
https://en.wikipedia.org/wiki/Confabulation
* insofar as “guess” conveys an attempt to be probably in the zone
Ars rips off this original reporting, but makes it worse by leaving out the word "some" from the title.
‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk: https://www.theguardian.com/technology/2026/jan/11/google-ai...
Removing "some" doesn't make it worse; it would only be worse if the title implied "all". "Google removes AI health summaries after investigation finds dangerous flaws" is functionally equivalent to "Google removes some of its AI summaries after users' health put at risk".
Oh, and also, the Ars article itself still contains the word "Some" (on my AB test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"
How could they even offer that without a medical device license? Where is the FDA when it comes to enforcement?
Being gutted by DOGE and the Trump Administration under RFK Jr.
Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to their Gemini offerings, including the free one. The continued existence of it is baffling.
It's mystifying. A relative showed me a heavily AI-generated video claiming a Tesla wheelchair was coming (self-driving of course, with a sub-$800 price tag). I tried to Google it to quickly debunk and got an AI Overview confidently stating it was a real thing. The source it linked to: that same YouTube video!
Yeah. It's the final nail in the coffin of search, which now actively surfaces incorrect results when it isn't serving ads that usually deliberately pretend to be the site you're looking for. The only thing I use it for any more is to find a site I know exists but I don't know the URL of.
What do you use instead… that doesn’t piggyback off of google search?
The AI summaries clearly aren’t bad. I’m not sure what kind of weird shit you search for that you consider the summaries bad. I find them helpful and click through to the cited sources.
...and the cited source is AI generated video(s). There are summaries that say exactly the opposite of the correct result.
But only for some highly specific searches, when what it should be doing is checking if it's any sort of medical query and keeping the hell out of it because it can't guarantee reliability.
It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.
The AI summary is total garbage. Probably the most broken feature I've seen released in a while.
huh.. so Google doesn't trust its own product.. but OpenAI and Anthropic are happy to lie? lol
Google for "Malay people acne" or other acne-related queries. It will readily spit out the dumbest pseudoscience you can find. The AI bot finds a lot of dumb shit on the internet, which it serves back to you on the Google page. You can also ask it about the Kangen MLM water scam. Why do athletes drink Kangen water? "Improved Recovery Time" Sure buddy.
Also try "health benefits of circumcision"...
I agree with your point.
Going off-topic: the "health benefits of circumcision" bogus claims have existed for decades. The search engines return bogus information because the topic is mostly relevant for its social and religious implications.
I have a personal connection to the topic, and the discussion is similar to topics in politics: most people don't care and will stay quiet while a very aggressive group sells it as a panacea.
The problem isn't that search engines are polluted; that's well known. The problem is that people perceive these AI responses as something greater than a search query; they view them as an objective viewpoint reasoned out by some sound logical method, and anyone who understands the operation of LLMs knows that they don't really do that, except in some very specific edge cases.
ChatGPT told me I am the healthiest guy in the world, and I believe it.
If it's any help, women appreciate confidence. Yet the article is not about ChatGPT.