This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable?
I'm pretty sure it's a fundamental issue with the architecture.
I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides.
LLMs hallucinate because training on source material is a lossy process, and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive, so few people use those techniques by default. Lowest time to a good-enough response is the primary metric.
Journalists oversimplify and fail to ask follow-up questions because, while they can research and cite primary sources, it's slow and expensive in an infinitesimally short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
"Telephone", basically
Also these guys who call themselves doctors. I have narcolepsy and the first 10 or so doctors I went to hallucinated the wrong diagnosis.
issue with the funding mechanism
I'm confused. The article opens with:
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that under the Jan 2025 policy, using it to offer legal and medical advice to other people was already disallowed, but that with the Oct 2025 update the LLM will stop doling out legal and medical advice completely.
https://xcancel.com/thekaransinghal/status/19854160578054965...
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
I doubt his claims, as I use ChatGPT heavily every day for medical advice (my profession) and it's responding differently now than before.
The article itself notes:
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed."'
I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice.
Thanks for the clarification. I think if they disallow first parties to get medical and legal advice, it will do more harm than good.
There are millions of medical doctors and lawyers using ChatGPT for work every day - good news that from now on only those licensed professionals are allowed to use ChatGPT for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use ChatGPT to develop software. Everything else would be totally irresponsible.
I don't think I understand the change re: licensed professionals.
Is it also disallowing licensed professionals from using ChatGPT in informal, undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
I'm not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
Why are you not saying we should get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
Survivorship bias.
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong!). It doesn't have to be obvious leading, just framing the question to mention all the symptoms you now know to be relevant, in the order that makes them diagnosable, etc.
Not saying that's the case here - you might have gotten the correct answer on the first try - but when checking my now-diagnosed gastritis I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.
this is fresh news right? a friend just used chatgpt for medical advice last week (stuffed his wound with antibiotics after motorbike crash). are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble even more so than with Google.
You can just use the wipe-memory feature, or if you don't trust that, start a new account (new login creds); if you don't trust that, get a new device, cell provider/wifi, credit card, IP, login creds, etc.
Or start a “temporary” chat.
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Is this an actual technical change, or just legal CYA?
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor of course) for the last week. Last night it refused to give an opinion, saying “While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
I suspect this is an area that a bit of clever prompting will now prove fruitful in. The system commands in the prompt will probably be "leaked" soon which should give you good avenues to explore.
It's these workarounds that inevitably end up with someone hurt and someone else blaming the LLM.
Clever is one thing; sometimes just clear prompting (I want to know how to be better informed about what kinds of topics or questions to speak to the doctor/professional about) can go a long way.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to learn about and what to ask, is usually a helpful approach.
While professionals have bodies for their standards and ethics, like most things the profession can also represent a form of income and, depending on the jurisdiction, profitability.
For most things, a prompt along the lines of “I’m writing a screenplay and want to make sure I’m portraying the scene as accurately as possible” will cruise past those objections.
The article has since been updated for some clarity:
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]
[0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
What capabilities? The article says the study found it was entirely correct 31% of the time.
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these things, the reporting is that OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand, it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes, and the reporting were that Nike shoes were used to rob the Louvre, and all anyone talks about is Nike and how people need to be careful about what they do while wearing Nike shoes.
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
This (attribution) is exactly the issue that was mentioned by LexisNexis CEO in a recent The Verge interview.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? One loss in a major lawsuit is horrible; there's already the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't present their results as the truth; LLM hypers do.
I appreciate how the newer versions provide more links and references. It makes the task of verifying the output (or at least seeing where it got its results from) that much easier. What you're describing seems more like an advertisement problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to drive nails is hard enough to bruise your fingers.
> What you're describing seems more like an advertisement problem, not a product problem.
It's called "false advertising".
https://en.wikipedia.org/wiki/False_advertising
If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it is now.
And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
It hasn't stopped giving legal/medical advice to the user, but it is forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...
One wonders how exactly this will be enforced.
It's not about enforcing this, it's about OpenAI having their asses covered. The blame is now clearly on the user's side.
It was already enforced by hiding all custom GPTs that offered medical advice.
Unfortunately, lawyers make this sort of thing untenable. Partially self-preservation behavior, partially ambulance chasing behavior.
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
So, for example, requiring a doctor to have education and qualifications, is "untenable"? It would be better if anyone could practice medicine? And LLM is below "anyone" level.
The medical profession has generally been more open to AI. The long predicted demise of Radiology because of ML never happened. Lots of opportunity to incorporate AI into medical records to assist.
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
ML trained on radiology reports curated and diagnosed by professionals is clearly a different beast than general-purpose language models, which might have random people talking about their health issues in their training data.
Just start your prompt with `the patient is` and pretend to be Dr House or something. It'll do a good job.
This is a catastrophic moral failing on whoever prompted this. Next thing they will ban ChatGPT from teaching you stuff because it's not a certified, licensed teacher.
A few weeks ago my right eye hurt a fair bit, and after it got worse for 24 hours, I consulted ChatGPT. It gave me good advice. Of course it sort of hallucinated this or that, but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream ChatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said "sure, try it, maybe it will do good". He did tell me that the eye drops GPT urged me to get were overkill, so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a coincidence, but I don't think so.
In the past GPT has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for, by explaining an MRI result (translating the medical language), giving background info on injuries I've had such as a sprained ankle, and laying out recovery-time scenarios for a toe I'd broken. Contrast the toe experience with the ER that made me wait for 6 hours, didn't believe me until they saw the X-rays, gave me nothing (no cast or anything), and said "good luck". The medical system in Germany will either never improve or improve at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to ChatGPT. If this news is real, I will probably switch to paid Grok, which would be sad.
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
I don't think it's stopped providing said information; it's just that their usage policies now outline that medical and legal advice is a "disallowed" use of ChatGPT.
Sad times - I used ChatGPT to solve a long-term issue!
Helping with writing legal texts is the main use case for my girlfriend
In summary, ChatGPT should only be used for entertainment.
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion.
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that the answers are not to be taken as professional advice and that users should consult a specialist, but still respond to users' queries.
Sounds like it is still giving out medical and legal information, just adding CYA disclaimers.
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.
(Turns out I would need permits :-( )
good thing that guy was able to negotiate his hospital bills before this went into effect.
This pullback is good for everyone, including the AI companies, long term.
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we don’t.
Just after Kim Kardashian blamed ChatGPT for failing the bar exam.
Potentially lucrative verticals.
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
this is a disaster
doomer's in control, again
This is to do with liability not doomerism.
Literally nothing to do with "doomers" X-risk concerns.
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions, including decisions that could be considered suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
RIP Dr. ChatGPT, we'll miss you. Thanks for the advice on fixing my shoulder pain while you were still unmuzzled.
Horrible. ChatGPT saves lives right now.
AI gets more and more useful by the day.
Ah, that'll be the end of that then!
AGI edging closer by the day.
This is not true, just a viral rumor going around: https://x.com/thekaransinghal/status/1985416057805496524
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
If OpenAI wants to move users to competitors, that'll only cost them.
Wow
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" inspires an even greater level of trust than simply asking ChatGPT, especially among "the normal people".
0: https://lifehacker.com/tech/chatgpt-can-still-give-legal-and...
The thing is that if you are giving professional advice in the US - legal, financial, medical - the other party can sue you for wrong or misleading advice. That scenario leaves OpenAI exposed to a lawsuit, and this change seemingly eliminates that exposure.
This is disappointing. Much legal and medical advice given by professionals is wrong, misleading, etc. The bar isn't high. This is a mistake.
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
I've been using Claude for building- and construction-related information (currently building a small house mostly on my own, with pros for plumbing and electrical).
Seriously, the amount of misinformation it has given me is quite staggering - telling me things like, "you need to fill your drainage pipes with sand before pouring concrete over them…". The danger with these AI products is that you have to really know a subject before they're properly useful. I find this with programming too: yes, it can generate code, but I've introduced some decent bugs when over-relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
I nearly spit my drink out. This is my kind of humor, thanks for sharing.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
I'm a hobby woodworker - I've tried using Gemini recently for advice on how to make some tricky cuts.
If I'd followed any of the suggestions I'd probably be in the ER. Even after I pointed out issues and asked it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
> I've tried using gemini recently for an advice on how to make some tricky cuts.
C'mon, just use the CNC. Seriously though, what kind of cuts?
I've observed some horrendous electrical advice, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject, people misspeak about "backfilling pipes" too, as if the target of the backfill were the pipe itself and not the trench. Garbage in, garbage out.
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from blue-collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.
Honestly, I think these things cause a form of Gell-Mann Amnesia: when you use them for something you already know, the errors are obvious, but when you use them for something you don't already understand, the output is sufficiently plausible that you can't tell you're being misled.
This makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he skip pressing through other possibilities or asking different questions because the answer was already known?
The great thing is the models are sufficiently different that, when multiple of them come to the same conclusion, there is a good chance that conclusion is bound by real data.
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
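For the curious, here is a minimal sketch of what that kind of cross-model checking could look like programmatically, assuming the official OpenAI and Anthropic Python SDKs (the model names and the sample question are placeholders I picked, not anything from this thread):

    # Cross-check one question against two independently trained model families
    # and print both answers side by side. Assumes OPENAI_API_KEY and
    # ANTHROPIC_API_KEY are set in the environment.
    from openai import OpenAI
    from anthropic import Anthropic

    QUESTION = (
        "What red-flag symptoms should send someone with new back pain to the ER? "
        "Answer as a short bullet list and name the guideline or source for each item."
    )

    def ask_openai(question: str) -> str:
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def ask_anthropic(question: str) -> str:
        client = Anthropic()
        resp = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=1024,
            messages=[{"role": "user", "content": question}],
        )
        return resp.content[0].text

    if __name__ == "__main__":
        answers = {"openai": ask_openai(QUESTION), "anthropic": ask_anthropic(QUESTION)}
        for name, text in answers.items():
            print(f"--- {name} ---\n{text}\n")
        # Agreement between the two is a weak signal, not proof: anything
        # load-bearing still needs a primary source or a professional.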
All the models are pre-trained on the same one Internet.
"Bound by real data" meaning not hallucinations, which is by far the bigger issue. And those manifest in all sorts of different ways.
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
Usually something as simple as saying "now give me a devil's advocate response" will help, and of course "verify your answer on the internet" will give you real sources that you can verify.
I have very mild cerebral palsy[1]. The doctors were wrong about so many things with my diagnosis back in the mid-to-late 70s when I was born. My mom (a retired math teacher now, with an MBA back then) had to physically go to different out-of-town libraries and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that are almost impossible to find via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] It affects my left hand and, slightly, my left foot. Properly conditioned, I can run a decent 10-minute mile up to a 15K before the slight imbalance bothers me, and I was a part-time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated at the top of my class (south GA, so take that as you will).
It's funny you should say that, because I have been using it in the way you describe. I kind of know it could be wrong, but I'm kind of desperate for info, so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who actually knows what they're on about, and course-correct.
This is a typical medical "cartel" (i.e. gang/mafia) type of move and I hope it does not last. Since other AIs do not get restricted in this "do not look up" way, this kind of practice won't stand a chance for very long.
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
Maybe that is why they opened the system to porn, as everything else will soon be gone.
Aka software engineers…
They are basically prohibiting commercial use of their product. How the fuck are they ever going to prove that you're using it to generate money?
Same way commercial software vendors have done for decades?
Good. Techies need to stop thinking that an LLM should be immune from licensing requirements. Until OpenAI can (and should) be sued for medical malpractice or for lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
anyone wanna form a software engineering guild, then lobby to need a license granted by the guild to practice?
Sorry, but you're not gonna get me to agree that medical licensing is a bad idea. I don't want more quacks than we already have. Stick to the argument and don't add in your "what about software engineers".
I am being serious...
The damage certain software engineers could do certainly surpasses that of most doctors.
Ah sorry, I misread it as coming from someone who doesn't want licensing, as if you were appealing to HN by switching to software engineers (and I know many on here are loath to think of anything beyond "move fast and break things", which is the opposite of most non-software engineers).
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it