They must be protected from lawsuits. If not, they're headed for rude awakenings.
This morning I was checking Medicare provider ratings. The browser launched an AI module that asserted a particular medical provider was rated 5 of 5 stars, and it was surprisingly emphatic that this was the "best rating". But I knew the provider's actual rating was 3.6 (of 5), so I disabled the AI module.
I'm happy with the truth.
True title: ‘From taboo to tool’: 30% of GPs in UK use AI tools in patient consultations, study finds
Quote:
Almost three in 10 GPs in the UK are using AI tools such as ChatGPT in consultations with patients, even though it could lead to them making mistakes and being sued, a study reveals.
This won't work in the USA without a price to be paid.
We don't have government healthcare. Malpractice is a legitimate concern for US doctors.
Insurance companies may refuse to cover this misuse of AI or they may charge a higher premium for a doctor who insists on doing so.
I expect similar will apply in lots of other businesses.
Relying on a tool that is widely known to be flawed is a textbook definition of negligence, and a lawsuit waiting to happen.