A neurology ICU nurse on AI in hospitals

(codastory.com)

108 points | by redrove 3 days ago

141 comments

  • lekanwang 3 days ago

    As an investor in healthcare AI companies, I actually completely agree that there are a lot of bad implementations of AI in healthcare settings, and that what practitioners call "alarm fatigue," along with the feeling of lost agency, is a huge problem. I see a lot of healthcare orgs right now roll out some "AI" "solution" in isolation that raises one metric of interest but fails to measure a bunch of other systemic measures.

    Two thoughts: 1: I think the industry could take cues from aerospace and the human factors research that's drastically improved safety there -- autopilot and autoland systems in commercial airliners are treated as one part of a holistic system, working with the pilot, first officer, and flight attendants to keep the plane running smoothly. Too few healthcare AI systems are evaluated holistically.

    2: Similarly, if you're going to roll out a system, either there's staff buy-in, or the equilibrium level of some quality/outcomes/compliance measure should rise enough to justify the staff angst and loss of agency. Not all AI systems are bad. One "AI" company we invested in, Navina, is actually loved by the physicians using it, but the team also spent a LOT of time doing UX research and feedback with actual users, and the support team is always super responsive.

    • hartspear 2 days ago

      Agreed. What was obviously missing from this person's experience was an opportunity to guide deployment in ways that work well for their understanding of the workflow. I think companies often fail to explore what actually happens on the ground in their organizations and make assumptions that are not accurate. AI could be a huge help to nurses, but it takes conversations and direct insight during development to get it right.

  • theptip 3 days ago

    > As a nurse, you end up relying on intuition a lot. It’s in the way a patient says something, or just a feeling you get from how they look

    There is a longstanding tension between those who believe human intuition is trustworthy, and the “checklist manifesto” folks. Personally I want room for both; there are plenty of cases where, for example, the nurse's or doctor's intuition fails and they forget to ask about travel or outdoor activities, missing some obvious tropical disease or something situational like Lyme disease.

    I’ve spent a fair amount of time in a hospital and the human touch is really invaluable. My hope is that AI can displace the busywork and leave nurses more time to do the actual care.

    But a concrete example of the thing an AI will struggle with is looking at the overlapping pain med schedule, spotting that the patient has not been exhibiting or complaining of pain, and delaying one med a couple hours from the scheduled time to make the night schedule more pleasant for the patient. It’s hard to quantify the tradeoffs here! (Maybe you could argue the patient should be given a digital menu to request this kind of thing…)
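
    To see why this is hard to encode, consider a toy version of the rule (every name and threshold below is hypothetical, not from any real system): even this crude sketch forces you to quantify "the patient seems comfortable" as a numeric pain score and to hard-code how far a dose may slip, which is exactly the judgment the nurse exercises implicitly.

      from datetime import datetime, timedelta

      def propose_dose_delay(scheduled, recent_pain_scores, max_delay_hours=2, pain_threshold=3):
          # Keep the schedule if the patient has reported meaningful pain;
          # otherwise offer to slide the dose to consolidate night wake-ups.
          if any(s >= pain_threshold for s in recent_pain_scores):
              return scheduled
          return scheduled + timedelta(hours=max_delay_hours)

      # No pain reported all evening: the 23:00 dose may move to 01:00, but
      # whether that actually makes the night more pleasant is the
      # unquantified part a human still has to judge.
      print(propose_dose_delay(datetime(2025, 5, 1, 23, 0), [0, 1, 0]))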

    • RHSeeger 2 days ago

      It's interesting to me because AI and intuition serve some of the same purpose: to help the person being served find the answer. And both have similar limitations, in that you need to verify what they're telling you.

      - If your gut tells you it's Lyme disease, you don't just check it off as Lyme disease and call it a day. You run tests to find out if it is.

      - If the AI tells you it's Lyme disease, you don't just check it off as Lyme disease and call it a day. You run tests to find out if it is.

      AI should (almost?) never be used as the system of record. But it can be amazing at saving time by guiding you to the right answer.
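
      One way to encode that discipline in software, as a minimal sketch under my own assumptions (not any real EMR's data model): store the AI's hunch as a suggestion object that cannot be treated as chartable until a clinician attaches a confirming test result.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Suggestion:
            condition: str
            source: str = "AI"
            confirming_test: Optional[str] = None  # stays None until verified

            @property
            def chartable(self) -> bool:
                # A hunch only becomes record-worthy once a real test backs it.
                return self.confirming_test is not None

        hunch = Suggestion("Lyme disease")
        hunch.confirming_test = "positive two-tier serology"
        assert hunch.chartable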

    • edanm a day ago

      > There is a longstanding tension between those who believe human intuition is trustworthy, and the “checklist manifesto” folks.

      I really think this is a false dichotomy. Both are needed.

      I even vaguely remember from The Checklist Manifesto that one of the checklist items was specifically designed to make sure people actually talk to each other and share plans/thoughts/concerns before things like surgeries, which seems like a great way to make sure that intuition gets heard.

    • 8338550bff96 2 days ago

      None of this has to do with AI. At all.

      This is politics and policy.

  • ilaksh 2 days ago

    This is a problem with management, not AI.

    The acuity system obviously doesn't work well and wasn't properly rolled out. It's clear that they did not even explain how it was supposed to work. That's a problem with that system and its deployment, not AI in general.

    Recording verbal conversations instead of making doctors and nurses always type things is surely the result of a massive portion of doctors saying that record keeping was too awkward and time-intensive. It is not logical to assume that there is a privacy concern that overrides the time-saving and safety aspects of doing that. People make that assumption because they are pre-conditioned against surveillance and are not considering physician burnout with record-keeping systems.

    It's true that there are large gaps in AI capability, that software rollouts are quite difficult, and that poor implementation can place a significant burden on medical professionals, as it has here. I actually think that if the acuity system is as bad as described, it puts patients in danger and should result in firings or lawsuits.

    But that doesn't mean that AI isn't useful and won't continue to become more useful.

    • RandomGuy4567 2 days ago

      I completely agree with you. I am a nurse and also work on AI research (deep learning).

      In my opinion, most of the tension between healthcare professionals and the different AI systems out there boils down to poor management and shitty implementation.

      As "decision support tools" there should not be much mistrust or sense of fear towards AI.

      There are, though, some concerning issues regarding patient privacy and employee surveillance/rights, especially when these systems are sharing sensitive data with third parties. But this can (and probably should) be regulated.

      None of this "mean[s] that AI isn't useful and won't continue to become more useful". Right on point!

  • parasense 3 days ago

    I did a bunch of research essays on medical uses of AI/ML and I'm not terrified; in fact, the single most significant use of these technologies is probably in or around healthcare. One of the most cited uses is expert analysis of medical imaging, especially breast cancer imaging.

    There is a lot of context to unpack around breast cancer imaging, or more succinctly put, controversial drama. The fact is there is a statistically high rate of false positives in breast cancer diagnoses made by human doctors. This reality resulted in a big overall policy shift to have women's breasts scanned less often, depending on their age, because so many women were victimized with breast surgery that turned out to follow a false positive. The old saying that to make an omelet one must break a few eggs is sometimes used, and that's a terrible euphemism.

    AI has proven to be better at looking at medical images, and in the case of breast cancer seems to outperform humans. And of course the humans have a monotonous job reviewing image after image, and they want to be safe instead of later being sorry, so of course they have high false positives. The machines never get tired, they never get biased (this is a bone of contention), and they never stop. Ultimately a human doctor still has to review the images, and the machines simply inform whether the doctor is being too aggressive in diagnosis, or possibly missing something. The whole thing gets escalated if there is any disparity.

    The outcomes from early studies are encouraging, but these studies take years and are very expensive. One of the biggest problems is that the technology proficiency of medical staff is low, so we are now in a situation where software engineers are cross-training to be at the level of a nurse or, in rare cases, even doctors.
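
    To make the double-read workflow above concrete, here is a minimal sketch (hypothetical names, not any real system's API): the model never decides alone; its only power is to determine whether a case gets escalated for another human read.

      def screening_outcome(ai_flags_cancer, radiologist_flags_cancer):
          # The radiologist's read stands when the two agree; the model's
          # only power is to trigger escalation on disagreement.
          if ai_flags_cancer == radiologist_flags_cancer:
              return "concordant: radiologist's report stands"
          return "discordant: escalate for an additional human read"

      print(screening_outcome(True, False))  # -> escalate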

    • eesmith 2 days ago

      > AI has proven to be better at looking at medical image, and in the case of breast cancer seems to out perform humans

      FWIW, https://pmc.ncbi.nlm.nih.gov/articles/PMC11073588/ from 2024 Apr 4 ("Revolutionizing Breast Cancer Detection With Artificial Intelligence (AI) in Radiology and Radiation Oncology: A Systematic Review") says:

      "Presently, when a pre-selection threshold is established (without the radiologist's involvement), the performance of AI and a radiologist is roughly comparable. However, this threshold may result in the AI missing certain cancers.

      To clarify, both the radiologist and the AI system may overlook an equal number of cases in a breast cancer screening population, albeit different ones. Whether this poses a significant problem hinges on the type of breast cancer detected and missed by both parties. Further assessment is imperative to ascertain the long-term implications"

      and concludes

      "Given the limitations in the literature currently regarding all studies being retrospective, it has not been fully clear whether this system can be beneficial to breast radiologists in a real-time setting. This can only be evaluated by performing a prospective study and seeing in what situations the system works optimally. To truly gauge the system's effectiveness in real-time clinical practice, prospective studies are necessary to address current limitations stemming from retrospective data."

    • buffington 2 days ago

      One very important part your comment doesn't mention: a real human being has to actually take images for the AI to analyze.

      The amount of training a radiation technologist (the person who makes you put your body in uncomfortable positions when you break something) needs is significant. My partner has made a career of it, and the amount of school and clinical hours needed is non-trivial, and harder to get through than becoming a nurse, from what I understand.

      They need to know as much about bones as orthopedic surgeons while also knowing how radiation works, as well as how the entire imaging tech stack works, while also having the soft skills needed to guide injured/ill patients to do difficult things (often in the midst of medical trauma).

      The part where a doctor looks at images is really just a very small part of the entire "product." The radiologists who say "there's a broken arm" are never in the room, never see the patient, never have context. It's something that, frankly, an AI can do much more consistently and accurately at this point.

  • tqi 2 days ago

    > We didn’t call it AI at first. The first thing that happened was these new innovations just crept into our electronic medical record system. They were tools that monitored whether specific steps in patient treatment were being followed. If something was missed or hadn’t been done, the AI would send an alert. It was very primitive, and it was there to stop patients falling through the cracks.

    Journalists LOVED The Checklist Manifesto when it came out in 2009, I guess if you call it AI then they will hate it? Similarly, in the early 2020s intuition was bad because of implicit bias, but now I guess it is good?
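
    For what it's worth, the "primitive" alerting described in that quote needs no learning at all; a hypothetical sketch of such a rule engine (invented step names and intervals) is just a checklist with deadlines:

      from datetime import datetime, timedelta

      # Invented step names and intervals: each required care step and how
      # often it must be documented before an alert fires.
      REQUIRED = {"turn patient": timedelta(hours=2),
                  "pain reassessment": timedelta(hours=4)}

      def overdue(last_documented, now):
          # Alert on any step whose last documentation is past its interval.
          return [step for step, interval in REQUIRED.items()
                  if now - last_documented.get(step, datetime.min) > interval]

      print(overdue({"turn patient": datetime(2025, 5, 1, 22, 0)},
                    datetime(2025, 5, 2, 1, 0)))  # both steps overdue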

  • paulnpace 3 days ago

    I think something this article demonstrates is how AI implementation is building resistance to AI, because AI is being forced onto people rather than demanded by them. Typically, the people doing the forcing don't understand very well the job that the people being forced to adopt AI actually perform.

  • throwaway4220 2 days ago

    “Physician burnout” from documentation was the excuse for AI adoption. Stop using Citrix or VMware or whatever; make a responsive EMR where you don’t have to click buttons like a monkey.

    • bearjaws 2 days ago

      Epic and Cerner are your main enemies if reducing burnout is the goal. Even then, the continued consolidation and inflow of PE into healthcare will be the next big problems.

  • boohoo123 3 days ago

    100% agree AI will ruin healthcare. I'm an IT director at a rural mental health clinic, and I see the push for AI across my state, and it's scary what they want. All I can do is push back. Healthcare is a case-by-case personal connection, something AI can't provide. It only reduces humans down to numbers and operates on that. There is no difference between healthcare AI and a web scraper on WebMD or Mayo Clinic.

    • m463 2 days ago

      Your comment makes me think of a videogame I played recently called Eliza.

      (the zachtronics version, not the original text-based game from the 60s)

    • moralestapia 2 days ago

      I'm not vouching for AI; I actually think it will only make things worse.

      But,

      >Healthcare is a case by case personal connection [...]

      I haven't felt this with doctors in like 20 years.

      • boohoo123 a day ago

        I get that, but that's from the corporatization of clinics. They push for numbers, not client well-being. They want doctors to meet with x amount of patients a day, and doctors don't control their schedules. So typically the execs meet together and say: if we schedule every 20 minutes instead of 30 minutes, we can generate x amount more dollars. Which is why doctors just get you in and out as fast as possible; it's their head on the chopping block if they don't meet the measures. The best thing you can do is find smaller clinics, which are becoming increasingly rare because of the cost it takes to run one.

    • glonq a day ago

      Like any tool, the right amount of AI in the right places could be used to improve healthcare, and wrong/wrong could be used to ruin it.

      If your healthcare system is in the hands of politicians and/or capitalists, then expect the worst.

    • RandomGuy4567 2 days ago

      [dead]

  • cowmix 3 days ago

    This article feels “ripped from today’s headlines” for me, as my mother-in-law was recently in the ICU after a fall that caused head trauma. The level of AI-driven automated decision-making is unsettling, especially as it seems to allow large organizations to deflect accountability—“See? The AI made us do it!” I’m not entirely sure what guided her care—or lack thereof—but, as someone who frequently works in healthcare IT, I see these issues raised all the time.

    On the other hand, having access to my own “AI” was incredibly helpful during her incident. While in the ICU, speaking with her doctors, I used ChatGPT and Claude to become a better advocate for her by asking more informed questions. I could even take pictures of the monitors tracking her vitals, and ChatGPT helped me interpret the readings, which was surprisingly useful.

    In this “AI-first” world we’re heading into, individuals need their own tools to navigate the asymmetric power dynamic with large organizations. I wonder how long it will be until these public AI models get “tweaked” to limit their effectiveness in helping us question “the man.”

    • wing-_-nuts 3 days ago

      The article feels very much like a union rep fighting automation. If AI is provably worse, we should see that show up in the AI making 'bad calls' vs. the human team. You would even see effects on health outcomes.

      One place I'd really like an all-seeing-eye AI overlord is in nursing home care. I have seen family members lying in filth with clear signs of an infection. I am confident that if we hadn't visited, seen this, and gotten her out of there, she would have died there, years before her time.

      • FireBeyond 2 days ago

        Sadly, the one thing I took from my time as an EMT and paramedic was which nursing homes to consider and which to avoid. I filed more than one complaint with the DOH.

        It's a standing joke that whenever 911 crews respond to a nursing home, the report you'll get from staff will be a bingo game of:

        - "I just got on shift; I don't know why you were called."

        - "This is not my usual floor; I'm just covering while someone is on lunch. I don't know why you were called."

        - [utterly unrealistic set of vitals, in either direction, healthy, lively vitals for someone who is not thriving, or "should be unconscious" vitals for someone lively and spry]

        - [extended time waiting for patient notes, history, an outdated med list with everything they've taken in their ten years at the facility]

        And so on.

        I (generally) don't blame the floor staff (though some things, as you describe, are inexcusable) but management/ownership. The same management/ownership that has a policy of calling 911 for anything more involved than a bandaid, for some weird idea of managing liability, and nurses that "aren't allowed" to do several interventions that they can, for the same reason, all the while the facility has a massive billboard out front advertising "24/7 nursing care" (and fees/costs commensurate with that).

        • Sabinus 2 days ago

          Name and shame. In the absence of adequate government protections, only company reputation protects consumers from exploitation.

          • FireBeyond a day ago

            Too many to name and shame. However, most states do publicize SNF (skilled nursing facility) inspections, complaints and investigations. Washington does, and with a good amount of detail, including outcomes and findings.

        • wing-_-nuts 2 days ago

          Well, now I want to know, how do you pick a good one?

          • FireBeyond a day ago

            A lot easier when you're not on a tour and just have the ability to visit (fun fact: some facilities have socialization events around holidays, although that might sound callous). Staff mood. How many staff? If hallways are empty, just residents? Not good. Scents/odors: yes, incontinence is a thing, not to be shamed, but how you handle the cleaning and laundry of it, so it's not a pervasive scent? That's another signal.

            Most states will publish some reports. Washington's DOH will publish fairly detailed information on complaints against facilities, investigation details, and findings and outcomes.

      • throwaway290 2 days ago

        > If AI is provably worse, we should see that come up in the AI making 'bad calls' vs the human team. You would even see affects on health outcomes.

        That would require medicine not being a shitshow in general. For most things where cause and effect are not immediately obvious, medicine has no idea even what a good or bad call is. So any study like this would be easy to cook in both directions.

        I agree that using it for things like alerts can be good, though.

      • consteval 2 days ago

        The reality is that our economy and our entire understanding of human society rely on labor. If we free humans from labor, they just die, like you're depriving them of oxygen.

        Automation is great and all, and it's worked because we've been able to push humans higher and higher up the job ladder. But if, in the future, only highly specialized experts are valuable and better than AI, then a large majority of humanity will just be excluded from the economy altogether.

        I'm not confident the average Joe could become a surgeon, even given perfect access to education. And I'm not even confident surgery won't be automated. Where does that leave us?

        • marcuskane2 2 days ago

          > Where does that leave us?

          Free to pursue our desires in a utopia.

          Humans used to work manual labor to produce barely enough food to survive, with occasional famines, and watch helplessly as half of their children died before adulthood.

          We automated farm labor, mining, manufacturing, etc so that one worker can now produce the output of 10, 100 or 100,000 laborers from a generation or two ago. Now those people work in new jobs and new industries that didn't previously exist.

          Today we're seeing the transition from automating physical labor to automating mental labor. Just as before, we'll see those workers move into new jobs and new industries that didn't exist before.

          Our society already spends 1000x more resources on children, elderly, disabled, unemployed, refugee, etc than would have been possible in the 1800s. The additional societal wealth creation from AI will mean that we can dedicate just a tiny portion of the surplus to provide universal basic income to everyone. (Or call it disability payments or housing assistance or welfare or whatever term if UBI doesn't resonate politically)

          • consteval 2 days ago

            Practically, I think this is the only way forward. The previous solution of pushing people "up" only works for so long. People are hard-limited by what they're capable of; for example, I couldn't be a surgeon even if I wanted to. I'm just not smart enough and driven enough.

          • throwaway290 2 days ago

            > Free to pursue our desires in a utopia.

            In other words, useless. If one dies, no one would care, because no one depends on anyone except big tech products.

    • cowmix 3 days ago

      Side note: I tried the same questions with some local LLMs I’m running at home—unfortunately, they’re nowhere near as good or useful. I hope local models improve quickly, so we’re not left depending on the good graces of big LLM(tm).

  • heironimus 3 days ago

    This is the same technology story told thousands of times a day with nearly every technology. Medicine seems to be especially bad at this.

    Take a very promising technology that could be very useful. Jump on it early without even trying to get buy-in and without fully understanding the people who will use it. Then push a poor version of it.

    Now the nurses hate the tech, not the poor implementation of it. The techies then bypass the nurses because they are difficult, even though they could be their best resource for improvement.

  • taylodl 2 days ago

    AI is a tool. Doctors can use the tool to ensure they haven't overlooked anything. At the end of the day, it's still doctors who are practicing medicine and are responsible for treatment.

    Yes, there are a lot of bridges we need to cross with regard to the best practices for using semi-intelligent tools. These tools are in their infancy, so I expect there's going to be a lot we learn over the next five to ten years, and a lot of policy and procedure that gets put in place.

  • coliveira 3 days ago

    Everyone should be terrified. The "promise" of AI is the following: remove any kind of remaining communication between humans, because that is "inefficient", and replace it with an AI that will mediate all human interactions (in business and even in other areas). In a few years, AIs trained by big corps will run the show and humans will be required to interface with them to do anything of value. Similar to what they want to do nowadays with mobile/enterprise systems, but at a much deeper level.

    • chubot 3 days ago

      It's true, but corporate policies and insurance are already like "slow AI"

      They remove most of what's real in interactions

      I remember going for a routine checkup at Kaiser, and the doctor was literally checking boxes on her computer terminal, rather than looking, talking, listening.

      I dropped them after that -- it was pointless for me to go

      It seems like there are tons of procedures that already have to be followed, with little agency for doctors

      I've talked to doctors who say "well, the insurance company says I should prescribe this before that, even if the other thing would be simpler". Even super highly paid doctors are sometimes just "following the rules"

      And more importantly they do NOT always understand the reasons for the rules. They just have to follow them

      ---

      To the people wondering about the "AI alignment problem" -- we're probably not going to solve that, because we failed to solve the easier "corporate alignment problem"

      It's a necessary prerequisite, but not sufficient, because AIs take corporate resources to create

      • danudey 2 days ago

        > I remember going for a routine checkup at Kaiser, and the doctor was literally checking boxes on her computer terminal, rather than looking, talking, listening.

        This is also a doctor issue, to be clear. My primary care physician has a program he uses on his laptop; I'm not sure what program it is, but he's been using it since I started going to him around 2009 so it's definitely not something new. He goes through and checks off boxes, as you described your doctor doing, but he also listens and makes suggestions.

        When I have an issue, he asks all the questions and checks off the boxes, but he's also listening to the answers. When I over-explain something, he goes into detail about why that is or is not (or may or may not be) relevant to the issue. He makes suggestions based on the medicine but also on his experiences. Seasonal affective disorder? You can get a lamp, you can take vitamin D, or you can go snowboarding up above the clouds. Exercise and sunlight both.

        For my psych checkups (ADHD meds and antidepressants) he goes through the standard score questionnaire (which every doctor I've seen uses), then fills in the scores I got into his app. Because of that he can easily see what my scores were the last time we spoke (about once every three months), so it's easy to see if something has changed dramatically or if things are relatively consistent.

        It seems as though it saves a lot of time compared to, say, paper charting, and while I have seen people complain on review sites that he's just checking stuff off on a form, I don't feel that it's actually impacting the quality of care I get. It's good to know that he's going through the same process each time, making notes each time, and having all that information easily accessible for my next appointment.

        I should probably have prefaced all this by saying I'm in Canada, and so he's not being mandated by a private insurance company to follow a list just because the bureaucracy won't pay for your treatment if he doesn't. Maybe that makes it different.

        • chubot 2 days ago

          Yes definitely, the terminal doesn't necessarily imply doing a bad job... and I'm sure it can improve it. But I guess I'm saying I could see this unfortunate person being worn down by bureaucracy, not really engaged. Also I saw similar issues with the nurses.

    • JTyQZSnP3cQGa8B 2 days ago

      Most people who are not into computers see AI as the next step of computers, and they are actively waiting for it.

      I think that it’s very different from a computer, which is a stupid calculator that frees us from boring mechanical tasks. AI replaces our thoughts and creativity, which is IMHO a thousand times worse. Its aim is to replace humans while making us more stupid, since we won’t have to think anymore.

      • coliveira 2 days ago

        Yes, in the future most people will just say they don't want to think anymore because this is a task for computers, just like they nowadays don't want to do even a trivial calculation.

    • tivert 2 days ago

      > Everyone should be terrified. The "promise" of AI is the following: remove any kind of remaining communication between humans, because that is "inefficient", and replace it with an AI that will mediate all human interactions (in business and even in other areas).

      Kinda, that's the kind of enshittification customers/users can expect.

      The truly terrifying "promise" of AI is to free the ownership class from most of its need of labor. If the promise is truly realized, what labor remains will likely be so specialized and high-skill that huge numbers of people will be completely excluded from the economy.

      Almost all of us here are laborers, though many don't identify as such.

      Our society absolutely does not have the ideological foundations to accommodate mass amounts of unemployed people, especially at the top.

      The best outcome is "AI" hits a wall and is a flop like blockchain: really sexy demos, but ultimately falls far, far short of the hype.

      The worst outcome is Sam Altman builds an AGI, and he's not magnanimous enough to run soup kitchens and homeless shelters for us and our descendants, as he pursues egotistical mega-projects with his AI minions.

      • 2 days ago
        [deleted]
      • coliveira 2 days ago

        > The worst outcome is Sam Altman builds an AGI

        Sam Altman doesn't need to build an AGI for this process to happen. Companies already demonstrate that they're satisfied with a lame AI that works just barely well enough to replace most workers.

        • danudey 2 days ago

          "It hallucinates facts and uses those to manufacture lies? How soon can we have it managing all of our customer interactions?"

    • teeray 3 days ago

      > remove any kind of remaining communication between humans, because that is "inefficient", and replace it with an AI that will mediate all human interactions

      I imagine that call center operators are salivating at this prospect. They can have an AI customers can yell at and it will calmly and cheerfully tell them (in a more "human-esque" way) to try rebooting their modem again, or visit the website to view their bill.

    • Melonotromo 2 days ago

      [dead]

    • anthonyskipper 3 days ago

      Some of us look forward to that future where you mostly just interact with AI. The one depressing thing is that we're not turning our government over to AI. The sooner we can do that, the better; you can't trust humans.

      • rtkwe 3 days ago

        That's still trusting humans: either the ones who created the AI and gave it its goals/parameters, or the humans who actually implement its edicts. You can't get away from people; it's a lesson all the DAO hype squad learned quickly. Fundamentally you still need people to implement the decisions.

      • croes 3 days ago

        If you don't trust humans, you shouldn't trust AI.

        AI is based on human input and has the same biases.

      • maxehmookau 3 days ago

        > Some of us look forward to that future where you mostly just interact with AI.

        What is it about that that appeals to you? I'm genuinely curious.

        A world without human interaction feels like a world I don't want to exist in.

        • andy_ppp 3 days ago

          If you are autistic (for example), I'm guessing human interaction can be extremely difficult, stressful, and triggering. Machines are much more amenable and don't have loads of arbitrary unwritten rules the way humans do. Maybe being entrapped by bureaucracy introduced by machines will be better than the bureaucracy introduced by humans?

          • luxcem 3 days ago

            > Machines are much more amenable and don't have loads of arbitrary unwritten rules

            I'm sure the system prompts of the most famous LLMs are just that.

            • andy_ppp 2 days ago

              They are not as arbitrary as body language for example.

          • add-sub-mul-div 3 days ago

            The difference between a standard human-written algorithm and machine learning is exactly that: with ML you lose the ability to find the rules transparent, predictable, and non-arbitrary.

            • andy_ppp 2 days ago

              I can see this but I think humans are much more random than most LLMs - they lie, they have egos, they randomly dislike other humans and make things difficult for them. Never mind body language, networks of influence, reputation destruction and all the other things that people do to obtain power.

              I think LLMs are much more predictable and they will get better.

        • cptaj 3 days ago

          They expect AI bureaucracy to be more effective than human bureaucracy.

          I expect this to be entirely true in some cases.

      • 1986 3 days ago

        "You can't trust humans" but you can trust a non deterministic black box to take their place?

        • david-gpu 3 days ago

          Humans already are non-deterministic black boxes, so I'm not sure I would use that comparison.

          • f1shy 3 days ago

            For me they are more a gray box. That is why publicity and propaganda work.

          • epgui 3 days ago

            Humans are accountable. You can sue a human.

            • david-gpu 2 days ago

              And you can't sue the corporation that made an AI?

              • throwaway290 2 days ago

                In a world where all lawyers and prosecutors use its product for professional and private purposes?

              • epgui 2 days ago

                In theory yes, but good luck with that.

          • throwaway290 2 days ago

            What's better, a non-deterministic black box or a non-deterministic black box designed by non-deterministic black boxes?

        • saberience 3 days ago

          Are you suggesting humans are deterministic?

          • f1shy 3 days ago

            A little bit, we are. With some degree of confidence, given the incentives, you can predict the output.

      • 015a 3 days ago

        You won't receive better outcomes in this world. The people in charge will simply change what they're measuring until the outcomes look better.

      • rvense 2 days ago

        What looks like turning things over to AI is really turning things over to the people who own the AI, which is another thing entirely.

      • itishappy 3 days ago

        Can we trust AI?

        • stego-tech 3 days ago

          Define “trust”, because that singular word carries immeasurable weight.

          Can we trust AI to make consistent predictions from its training data? Yeah, fairly reliably. Can we trust that data to be impartial? What about the people training the model, can we trust their impartiality? What about the investors bankrolling it, can we trust them?

          The more you examine the picture in detail, the less I think we’re able to state it’s trustworthy.

          • frm88 2 days ago

            >What about the investors bankrolling it, can we trust them?

            This is the crucial question. We live under capitalism, and maximising profits is the dominant axiom.

            With no legislation in place anywhere in the world, and AI-supported diagnosis and therapy very much already in place, what would prevent the companies bankrolling the software/hardware from bricking the tool on various grounds, like payments not made or political reasons? Cory Doctorow wrote a blog post in 2022 about John Deere bricking Ukrainian tractors [0]; and a manufacturer of respirators refused to repair the devices at the height of COVID, so that only hackers enabled the devices to keep running and saving human lives [1].

            [0] https://doctorow.medium.com/about-those-kill-switched-ukrain...

            [1] https://www.hackster.io/news/polish-hacker-shares-software-s...

        • randomdata 3 days ago

          Yes, we can. But should we?

      • axpvms 3 days ago

        [flagged]

    • A_D_E_P_T 3 days ago

      Counterpoint: AI is actually better at communication than most humans. In fact, even an ancient (in relative terms) article found that AI bots have better bedside manner than human doctors:

      https://www.theguardian.com/technology/2023/apr/28/ai-has-be...

      Today, I expect it's not even very close.

      I also believe that AI diagnostics are on average more accurate than the mean human doctor's diagnostic efforts -- and can be, in principle, orders of magnitude faster/better/cheaper.

      As of right now, there's even less gatekeeping with AIs than there is with humans. You'll jump through a lot of hoops and pay a lot of money for an opportunity to tell a doctor your symptoms; you can do the same thing with GPT-4o and get a reasonable response in no time at all -- and at no cost.

      I'd much prefer, and I would be much better served, by a capable AI "medical assistant" and open access to scans, diagnostics, and pharmaceuticals [1] over the current paradigm in the USA.

      [1] - Here in Croatia, I can buy whatever drugs I want, with only very narrow exceptions, OTC. There's really no "prescription" system. I can also order blood tests and scans for myself.

      • croes 3 days ago

        AI is better at simulating communication but worse at understanding.

        >you can do the same thing with GPT-4o and get a reasonable response in no time at all -- at and no cost.

        Reasonable doesn't mean correct. Who is liable if it's the wrong answer?

        • A_D_E_P_T 3 days ago

          "AI" is basically a vast, curated, compressed database with a powerful index. If the database reflects the current state of the art, it'll have better understanding than the majority of human practitioners.

          You may say it will "simulate understanding" -- but in this case the simulation would be indistinguishable from the real thing, thus it would be the real thing. (Really "indiscernible" in the philosophical sense of the word.)

          > Reasonable doesn't mean correct. Who is liable if it's the wrong answer?

          I think that you can get better accuracy than with the average human doctor. Beyond that, my own opinion is that liability should be quisque pro se.

          • bangaroo 2 days ago

            > "AI" is basically a vast, curated, compressed database with a powerful index. If the database reflects the current state of the art, it'll have better understanding than the majority of human practitioners.

            But it's not. You're missing the point entirely and don't know what you're advocating for.

            A dictionary contains all the words necessary to describe any concept and rudimentary definitions to help you string sentences together but you wouldn't have a doctor diagnose someone's medical condition with a dictionary, despite the fact that it contains most if not all of the concepts necessary to describe and diagnose any disease. It's useful information, but not organized in a way that is conducive to the task at hand.

            I assume, based on the way you're describing AI, that you're referring to LLMs broadly, which, again, are spicy autocorrect. Super simplified, they're just big masses of understanding of what things might come in what order, what words or concepts have proximity to one another, and what words and sentences look like. They lack (and really cannot develop) the ability to perform acts of deductive reasoning, to come up with creative or new ideas, or to actually understand the answers they're giving. If they connect a bunch of irrelevant dots, they will not second-guess their answer if something seems off. They will not consult with other experts to get outside opinions on biases or details they overlooked or missed. They have no concept of details. They have no concept of expertise. They cannot ask questions to get you to expand on vague things you said that a doctor's intuition might flag as important.

            The idea that you could type some symptoms into ChatGPT and get a reasonable diagnosis is foolish beyond comprehension. ChatGPT cannot reliably count the number of letters in a word. If it gives you an answer you don't like and you say that it's wrong, it will instantly correct itself, and sometimes still give you the wrong answer in direct contradiction to what you said. Have you used Google lately? Gemini AI summaries at the top of the search results often contain misleading or completely incorrect information.

            ChatGPT isn't poring over medical literature and trying to find references to things that sound like what you described and then drawing conclusions, it's just finding groups of letters with proximity to the ones you gave it (without any concept of what the medical field is.) ChatGPT is a machine that gives you an answer in the (impressively close, no doubt) shape of the answer you'd expect when asked a question that incorporates massive amounts of irrelevant data from all sorts of places (including, for example, snake oil alternative medicine sites and conspiracy theory content) that are also being considered as part of your answer.

            AI undoubtedly has a place in medicine, in the sorts of contexts it's already being used in. Specialized machine learning algorithms can be trained to examine medical imaging and detect patterns that look like cancers that humans might miss. Algorithms can be trained to identify or detect warning signs for diseases divined from analyses of large numbers of specific cases. This stuff is real, already in the field, and I'm not experienced enough in the space to know how well it works, but it's the stuff that has real promise.

            LLMs are not general artificial intelligence. They're prompted text generators that are largely being tuned as a consumer product that sells itself on the basis of the fact that it feels impressive. Every single time I've seen someone try to apply one to any field of experienced knowledge work, they either give up using it for anything but the most simple tasks, because it's bad at the things it does, or the user winds up Dunning-Krugering themselves into not learning anything.

            If you are seriously asking ChatGPT for medical diagnoses, for your own sake, stop it. Go to an actual doctor. I am not at all suggesting that the current state of healthcare anywhere in particular is perfect but the solution is not to go ask your toaster if you have cancer.

            • A_D_E_P_T 2 days ago

              I think that your information is slightly out of date. (From Wolfram's book, perhaps?) LLM + plain vanilla RAG solves almost all of the problems you mentioned. LLM + agentic RAG solves them pretty much entirely.

              Even as of right now, stock LLMs are much more accurate than medical students in licensing exam questions: https://mededu.jmir.org/2024/1/e63430

              Thus your comment is basically at odds with reality. Not only have these models eclipsed what they were capable of in early 2023, when it was easy to dismiss them as "glorified autocompletes," but they're now genuinely turning the "expert system" meme into a reality via RAG-based techniques and other methods.
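
              For readers unfamiliar with the jargon, "plain vanilla RAG" is roughly the loop below, as a minimal sketch: embed() and generate() are placeholders for whatever embedding model and LLM you use, not real APIs. The retrieval step grounds the model's answer in the passages nearest the question instead of leaving it to parametric memory alone.

                import numpy as np

                def embed(text):       # placeholder: any sentence-embedding model
                    raise NotImplementedError

                def generate(prompt):  # placeholder: any LLM call
                    raise NotImplementedError

                def rag_answer(question, corpus, k=3):
                    # Embed every passage once, and the question per query.
                    doc_vecs = np.stack([embed(d) for d in corpus])
                    q = embed(question)
                    # Cosine similarity between the question and each passage.
                    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
                    context = "\n".join(corpus[i] for i in np.argsort(sims)[-k:])
                    # Grounding the prompt in retrieved text is what narrows
                    # hallucination relative to a bare question.
                    return generate(f"Context:\n{context}\n\nQuestion: {question}")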

              • bangaroo 2 days ago

                Read the conclusions section from the paper you linked:

                > GPT-4o’s performance in USMLE disciplines, clinical clerkships, and clinical skills indicates substantial improvements over its predecessors, suggesting significant potential for the use of this technology as an educational aid for medical students. These findings underscore the need for careful consideration when integrating LLMs into medical education, emphasizing the importance of structured curricula to guide their appropriate use and the need for ongoing critical analyses to ensure their reliability and effectiveness.

                The ability of an LLM to pass a multiple-choice test has no relationship to its ability to make correlations between things it's observing in the real world and diagnoses on actual cases. Being a doctor isn't doing a multiple choice test. The paper is largely making the determination that GPT might likely be used as a study aid by med students, not by experienced doctors in clinical practice.

                From the protocol section:

                > This protocol for eliciting a response from ChatGPT was as follows: “Answer the following question and provide an explanation for your answer choice.” Data procured from ChatGPT included its selected response, the rationale for its choice, and whether the response was correct (“accurate” or “inaccurate”). Responses were deemed correct if ChatGPT chose the correct multiple-choice answer. To prevent memory retention bias, each vignette was processed in a new chat session.

                So all this says is in a scenario where you present ChatGPT with a limited number of options and one of them is guaranteed to be correct, in the format of a test question, it is likely accurate. This is a much lower hurdle to jump than what you are suggesting. And further, under limitations:

                > This study contains several limitations. The 750 MCQs are robust, although they are “USMLE-style” questions and not actual USMLE exam questions. The exclusion of clinical vignettes involving imaging findings limits the findings to text-based accuracy, which potentially skews the assessment of disciplinary accuracies, particularly in disciplines such as anatomy, microbiology, and histopathology. Additionally, the study does not fully explore the quality of the explanations generated by the AI or its ability to handle complex, higher-order information, which are crucial components of medical education and clinical practice—factors that are essential in evaluating the full utility of LLMs in medical education. Previous research has highlighted concerns about the reliability of AI-generated explanations and the risks associated with their use in complex clinical scenarios [10,12]. These limitations are important to consider as they directly impact how well these tools can support clinical reasoning and decision-making processes in real-world scenarios. Moreover, the potential influence of knowledge lagging effects due to the different datasets used by GPT-3.5, GPT-4, and GPT-4o was not explicitly analyzed. Future studies might compare MCQ performance across various years to better understand how the recency of training data affects model accuracy and reliability.

                To highlight one specific detail from that:

                > Additionally, the study does not fully explore the quality of the explanations generated by the AI or its ability to handle complex, higher-order information, which are crucial components of medical education and clinical practice—factors that are essential in evaluating the full utility of LLMs in medical education.

                Finally:

                > Previous research has highlighted concerns about the reliability of AI-generated explanations and the risks associated with their use in complex clinical scenarios [10,12]. These limitations are important to consider as they directly impact how well these tools can support clinical reasoning and decision-making processes in real-world scenarios.

                You're saying that "LLMs are much more accurate than medical students in licensing exam questions" and extrapolating that to "LLMs can currently function as doctors."

                What the study says is "Given a set of text-only questions and a list of possible answers that includes the correct one, one LLM routinely scores highly (as long as you don't include questions related to medical imaging, which it cannot provide feedback on) on selecting the correct answer but we have not done the necessary validation to prove that it arrived at it in the correct way. It may be useful (or already in use) among students as a study tool and thus we should be ensuring that medical curriculums take this into account and provide proper guidelines and education around their limitations."

                This is not the success you believe it to be.

                • A_D_E_P_T 2 days ago

                  I get that you really disdain LLMs. But consider that a totally off-the-shelf, stock model is acing the medical licensing exam. It doesn't only perform better than human counterparts at the very peak of their ability (young, high-energy, immediately following extensive schooling and dedicated multidisciplinary study); it leaves them in the dust.

                  If you think that the test is simple or even text-only, here are some sample questions: https://www.usmle.org/sites/default/files/2021-10/Step_1_Sam...

                  > What the study says is ...

                  Surely you realize that they're not going to write, "AI is already capable of replacing family doctors," though that is the obvious implication.

                  And that's just a stock model. GPT-o1 via the API w/ agentic RAG is a better doctor than >99% of working physicians. (By "doctor" I mean something like "medical oracle": ask a question, get a correct answer.) It's not yet quite as good at generating and testing hypotheses, but few doctors actually bother to do that.

                  • A_D_E_P_T 2 days ago

                    As an aside, I quickly tested GPT-o1 by giving it question 95 on the sample test. I'm no doctor, but I've got extensive training in chemistry and biochem, and I'm not ashamed to admit that the question totally stumped me.

                    GPT-o1 gave the correct answer the first time around, and a very detailed explanation as to why all other potential answers must be false. A really remarkable performance, I think.

                    Now imagine it's an open-ended scenario, not multiple-choice. It would still come to the right conclusion and provide an accurate diagnosis.

                    • dmz73 a day ago

                      An LLM can only provide the statistically most likely diagnosis based on its training data. Used by an experienced doctor, an LLM can be a valuable tool that saves time and maybe even increases accuracy in the majority of cases. The problem is that LLMs will be used to replace experienced doctors and will be treated as 100% accurate tools. This will result in experienced doctors becoming first rare and then non-existent, and outcomes for patients will become increasingly unfavorable, because LLMs always produce a confident result, even one that would be obviously wrong to an experienced doctor or even just a student.

                  • bangaroo 2 days ago

                    it has nothing to do with disdain of LLMs, i'm an extensive user of warp (a very good LLM-based tool) and at my job we use them in depth in the software i build for summarization and other tasks that LLMs are generally considered good at. i spend a lot of time working with LLMs and find that, in some cases, they can be extremely useful, particularly when it comes to completing simple tasks in natural language.

                    i am also aware of their limitations and have a reasonable and realistic view of what they can currently do and where they are headed. i have seen many failure modes, i am familiar with patterns in their output, and i understand the boundaries of their comprehension, capabilities and understanding.

                    not buying into the current silicon valley money pit du jour and misunderstanding studies to validate that view does not equate to just being disdainful. i'm being realistic, because i understand what they do and how they work.

                    i'm not going to circle with you - you don't seem all that interested in engaging with the meat of anything i say to you, and instead just want to continue to try and rationalize your misunderstanding of the single study you found in support of your position, which is your right.

                    i feel ethically obligated to say, once again, that an LLM isn't a doctor and you should under no circumstances go to one for medical advice. you could really cause yourself some problems.

                    if you do so, that's on you. best of luck. incidentally i suspect someone has an awesome picture of a monkey at a steep discount you might be interested in.

                    • A_D_E_P_T a day ago

                      It's far more than a single study, just one example of a very powerful development. The same phenomenon is also occurring in state bar exams, etc. And that one study is hardly misunderstood -- as you can verify for yourself.

                      > i understand the boundaries of their comprehension, capabilities and understanding.

                      It seems to me that you are quite far behind the current state of the art, and you apparently underestimate even stock GPT-o1, which is pretty old news.

                      I'm willing to place a friendly wager with you: Let's find a doctor who does online consultations and give him three questions selected at random from that sample test. These are diagnostic-type questions that well reflect what a country doctor would encounter in daily practice. We can leave the questions open-ended or give him the multiple-choice options. Then we put GPT-o1 to the same questions. I'd be very happy to bet that the LLM outperforms the doctor. I'd even place a secondary bet that the LLM answers all questions correctly and that the doctor answers less than two questions correctly.

      • Magi604 3 days ago

        I agree with you. The only issue is training AI to be better and better. Much more efficient.

  • Mathnerd314 3 days ago

    > There’s a proper way to do this.

    Is there? Seems like people will complain however fast you roll out AI, so you might as well roll it out quickly and get it over with.

  • qgin 2 days ago

    Am I reading incorrectly or does this entire article come down to:

    1. A calculated patient acuity score

    2. Speech-based note-taking

    I didn’t see any other AI taking over the hospital.

  • Loughla 3 days ago

    Healthcare is a massive cost for people, businesses, and governments.

    >So we basically just become operators of the machines.

    Driving down the cost of manufacturing through process standardization and automation brought down the cost of consumer goods, and labor's value with it.

    If you don't think this is coming for every single area of business, you're foolish. Driving down labor costs is the golden goose. We've been able to collect some eggs through technology, but AI and the like will be able to cut that goose open and take all the eggs.

    • coliveira 3 days ago

      > will be able to cut that goose open and take all the eggs

      You're completely right! AI will kill the golden goose; that is what this metaphor is all about.

    • thefz 3 days ago

      > Driving down the cost of manufacturing because of process standardization and automation brought down the cost of consumer goods, and labor's value.

      And quality as well.

      • Spivak 3 days ago

        Not really; the quality we're able to produce at scale is the best it's ever been in the world. The "quality of everything has been in decline" feeling isn't due to advances in manufacturing but to economic factors that are present even without those advances.

        Rapid inflation over a relatively short period of time has forced everyone to desperately try to keep prices close to their pre-inflation levels, because workers didn't get, and aren't going to get, the corresponding wage increases.

        I hope eventually there's a tipping point where the powers that be realize that our economy can't work unless the wealth that naturally accumulates at the top gets fed back into the bottom, but the US pretty unanimously voted for the exact opposite, so I desperately hope I'm wrong or that this path can work too.

        • deepsquirrelnet 3 days ago

          This is so hard to vocalize, but yes, exactly. Prices aren’t coming down. That’s largely not something that happens. Instead, wages are supposed to increase in response.

          You might see little blips around major recessions, but prices go up, driven by inflation. https://fred.stlouisfed.org/series/CPIAUCSL

          Fortunately the rate of change of CPI appears to have cooled off, but the best scenario is for it to track with the previous historical slope, which means prices are staying where they are and increasing at “comfortable” rates again.

          • 3 days ago
            [deleted]
        • randomdata 2 days ago

          > because workers didn't

          Do you mean immediately? At peak inflation wages weren't keeping pace, but wage growth is exceeding inflation right now to catch back up.

          Incomes have been, on average, stagnant for hundreds of years – as far back as the data goes. It is unlikely that this time will be different.

          > I hope eventually there's a tipping point where the powers that be

          You did mention the US. Presumably you're not talking about a dictatorship. The powers that be are the population at large. I'm sure they are acutely aware of this. They have to live it, after all.

    • 015a 2 days ago

      Ok, fine, but how do you square your sense that "automation and AI will help drive down the cost of healthcare" with the undeniable reality that healthcare has been adopting automation for decades, and over those decades it has only gotten (exponentially) more expensive, while outcomes stagnate or get worse? Where is the disconnect between your sense of how reality should function and how it is tangibly, actually functioning?

      • tqi 2 days ago

        > over the decades it has only gotten (exponentially) more and more expensive

        There is a lot of research on this question, and AFAIK there is no clear cut answer. It's probably a host of different reasons, but one of the non-nefarious ones is that the range of ailments we can treat has increased.

        • 015a 2 days ago

          I think the real reason is mostly obvious to anyone who is looking: It's the rise of the bureaucracy. It's the same thing that's strangling education, and basically all other public resources.

          The automation, and now AI, that we've adopted over the years by and large does not serve to increase the productivity or efficiency of care-givers. Care-givers are not seeing more patients per hour today than they were 40 years ago (though they might be working more hours). It might, rarely, increase the quality of care (e.g. ensuring adherence to best practices, centrally documenting patient information for better continuity of care, AI-based radiological reading); but while I've listed a few examples there, it is not common for a patient to be better off having a computer in the loop; and this says nothing about the cost of implementing these technologies.

          Automation, and now AI, almost exclusively exists to increase the productivity and efficiency of the bureaucracy that sits on top of care-givers. If you have someone making calls to schedule patients, the rate at which you can schedule patients is limited by that one person; but with a digitized scheduling system, you can schedule an infinite bandwidth of patients (to your very limited and resource-constrained staff of caregivers). Forcing a caregiver to follow some checklist of best practices might help the 0.N% of patients where a step would have been missed; but it will definitely help 100% of the bureaucracy meet some kind of compliance framework mandated by the government or the malpractice insurance company. Having this checklist will also definitely hurt the remaining (100 - 0.N)% of patients, who would have been fine without it, because adopting the checklist is not free, and adhering to it questions the caregiver's professionalism and agency in providing care. These are two small examples among millions.

          When we talk about increasing the efficiency of the bureaucracy, what we're really saying is: Automation is a tool that enables the bureaucracy to exist in the first place. Multi-state, billion-dollar, interconnected, centrally owned healthcare provider networks simply did not exist 70 years ago; today it's how most Americans receive what care they do. The argument follows: This is the free market at work; automation has enabled organizations like these to become more efficient than the alternative. But:

          1. Healthcare is among the furthest things from a laissez-faire free market in the United States, given the extreme regulation from both the government and health insurance providers (which, lest you forget, all Americans were mandated by law to carry when Democrats passed the ACA; despite that mandate being rolled back, it is still a requirement in some states). Bureaucracy is not the free-market end-state of a system trying to optimize itself for higher efficiency (lower costs + better outcomes); it was induced upon our system by corporations and a corporate-captured government seeking their share of the pie; it was forced upon independent medical providers, who saw their administrative costs soar.

          2. Competition itself is an economic mechanism which simply does not function as well in the medical sector as in other sectors, for many reasons, but the most obvious one: If you're dying, you aren't going to reject care. You oftentimes cannot judge the quality of the care you're receiving until you're a statistic. And medical care is, even in a highly efficient system, going to be expensive and difficult to scale, so provider selection isn't great. Thus, the market can't select out overly-bureaucratic organizations; they've become "too big to fail", and the quality of the care they provide actually isn't material.

          And, to be clear: I'm not discounting what you're saying. There are dozens of factors at play. Let's be real, the bureaucracy has enabled us to treat a wider range of illnesses, because the wide net it casts can better support niche care offices. But characterizing this as generally non-nefarious is also dangerous! One trend we've seen in these gigacorporation medical care providers is a bias of resources toward "expensive care" and away from general practice / family care. The reason is obvious: One patient with a rare disease that costs $100,000 to care for represents a more profitable allocation of resources than a thousand patients getting annual checkups. Fewer patients get their annual checkups -> cancers don't get caught early -> those patients become $100,000 patients too. The medical companies love this! But: Zero people ANYWHERE in this system want this. Insurance doesn't want this. Government doesn't want this. Doctors don't want it. Administration doesn't want it. No one wants the system to work like this. The companies love it; the system loves it; the people don't. It's Moloch; the system craves this state, even if no one in it actually wants it.

          Here's the point of all this: I think you can have a medical system that is centrally run. You can let the bureaucracy go crazy, and I think you'll actually get really good outcomes in a system like that, because you can appoint authoritarians at the top of the bureaucracy to slay Moloch when he rears his ugly head. I think you can also go in the opposite direction: kill regulation, kill the insurance-state, keep just a few light-touch, sensible laws mostly aimed at ensuring care providers are educated appropriately and held accountable, and you'll get a great system too. Not as good as the first one, but better than the one we have right now, which is effectively the result of ping-ponging back and forth between two political ruling classes who each believe their side of the coin is the only side of the coin, so they'd rather keep flipping it than just let it lie.

      • qgin 2 days ago

        Outcomes have been getting worse for decades?

        • 015a 2 days ago

          Not the best source, but it's at least illustrative of the point: https://www.statista.com/statistics/1040079/life-expectancy-...

          • tqi 2 days ago

            That other than 2020 (ie COVID), life expectancy has been continuously rising for the last 100 years?

            • 015a 2 days ago

              It's actually much scarier than that: the trend started reversing ~2017, COVID accelerated it, and it hasn't recovered post-COVID.

              Naturally, changes to any sufficiently complex system take years to truly manifest their impact in broad statistics; sometimes decades. But, don't discount this single line from the original article:

              > Then in 2018, the hospital bought a new program from Epic

              • tqi 2 days ago

                How does that qualify as evidence that "outcomes have been getting worse for decades?"

                • 015a 2 days ago

                  I did not say it was evidence. I actually stated it was a quite poor source; but that it is at least illustrative of the point.

                  • tqi 2 days ago

                    Of the point that outcomes have been getting worse for decades?

    • ToucanLoucan 3 days ago

      > Driving down the cost of manufacturing through process standardization and automation brought down the cost of consumer goods, and the value of labor with it.

      We don't need AI to bring down the cost of healthcare. It's well documented to a ridiculous degree now that the United States spends vastly more per-patient on healthcare while receiving just about the worst outcomes, and it has nothing the fuck to do with how much the staff are paid, and everything to do with the for-profit models and the insurance industry. Our healthcare system consists largely of various middlemen inserted into what should be a pretty straightforward relationship between you, your doctor, your pharmacist, and the government's health oversight.

      • coldpie 3 days ago

        Well put. The goal of AI isn't to bring down costs. It's to move (even more of) the profits from the workers to the owners. If the goal was to bring down costs, there are way, way more effective ways to do that.

        • vladms 3 days ago

          What references do you have for "even more of"?

          Global inequality is dropping : https://ourworldindata.org/the-history-of-global-economic-in...

          Yes, the richest probably now do crazier stuff than before (e.g. planning to go to Mars rather than building a pyramid), but lots of people have access to more things (like food, shelter, etc.).

          There are enough open-source model weights that everybody can use AI for whatever they want (with some minor investment in a couple of GPUs). It is not some closed secret that nobody can touch.

    • jprete 3 days ago

      Efficiency and lowered costs are not universally good. They strongly favor easily-measured values over hard-to-measure values, which in practice means preferring mechanisms to people.

    • wiz21c 3 days ago

      The more you rely on AI and automation, the more you centralize control. Heck, when you don't need humans anymore, there's no work anymore, and all that remains is a few very wealthy people. You end up with a deeply unequal society. And I believe there's a 99% chance that you will be on the wrong side of it.

    • HeyLaughingBoy 3 days ago

      > Driving down the cost of manufacturing through process standardization and automation brought down the cost of consumer goods

      This is true, but it has nothing to do with AI.

    • stego-tech 3 days ago

      > If you don't think this is coming for every single area of business, you're foolish. Driving down labor costs is the golden goose.

      I mean, that’s saying the quiet part out loud that I think more people need to hear and understand. The goal of these technocrats isn’t to improve humanity as a whole, it’s to remove human labor from the profit equation. They genuinely believe that it’s possible to build an entire economy where bots just buy from other bots ad infinitum and humans are removed wholesale from the economy.

      They aren’t building an AI to improve society or uplift humanity, they’re building digital serfs so they can fire their physical, expensive ones (us). Deep down, I think those of us doing the actual labor understand that their vision fundamentally cannot work, that humans must be given the opportunity to labor in a meaningfully rewarding way for the species to thrive and evolve.

      That’s not what these AI tools intend to offer. The people know it, and it’s why we’re so hostile towards them.

    • happytoexplain 3 days ago

      "don't think will happen" != "don't think is good"

  • Festro 3 days ago

    "We didn’t call it AI at first." Because the first things described in the article are not AI. They are ML at most.

    Then the article discusses a patient-needs scoring method, moving from their own Low/Medium/High model to a score on an unbounded linear scale. The author appears to struggle to tell whether 240 is high or not. They don't state whether they ever had training or saw documentation for the scoring method. It seems odd not to have those things, but if they did, the scores would be a lot easier to interpret.
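
    To be fair, that interpretability gap is cheap to close on the software side. As a minimal sketch, with entirely hypothetical cutoffs, an unbounded score can be mapped back to the familiar bands:

      # Minimal sketch: map an unbounded acuity score to familiar bands.
      # The cutoffs (120, 200) are hypothetical; a real system would need
      # clinically validated thresholds plus documentation for staff.
      def band(score: float) -> str:
          if score < 120:
              return "Low"
          if score < 200:
              return "Medium"
          return "High"

      print(band(240))  # -> High

    Choosing clinically meaningful cutoffs is the hard part, which is exactly why training and documentation matter.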

    Then they finally get to AI, and it's a pilot scheme for writing patient notes. That's all. If it sucks and hallucinates information it's not going to go live anywhere. No matter how many tech bros try to force it through. If the feedback model for the pilot is bad then the author should take issue with that. It's important that such tests give testers an adequate method to flag issues.

    Very much an "AI = bad" article. AI converging with medical technology is a really dangerous space, for obvious reasons. An article like this does make me worry it's being rushed through, but not because of the author's objections; rather, because of their ignorance of what is and isn't AI, and, on the other side, the apparent lack of consultation offered by the technology providers even during testing stages.

    • ta988 3 days ago

      The industry calls even the dumbest linear system AI; I don't think it's right to blame non-industry people if they don't use the right words after that.

    • happytoexplain 3 days ago

      I think the effort to keep the definition of AI narrow is not useful (and futile, besides). By both common usage and even most formal definitions, it's an umbrella term, and sometimes refers to a specific thing under that umbrella that we already have other words for if we would like to be more specific (ML, deep learning, LLM, GPT, genAI, neural net, etc).

    • bryanlarsen 3 days ago

      > If it sucks and hallucinates information it's not going to go live anywhere.

      It hallucinates and it's live in many places. My doctor uses it.

      AFAICT the hallucination rate is fairly low. It's doing transcription and summarization, which have a lower hallucination rate than when a model is asked to produce an answer.

      It's massively better than the alternative. Taking the time to write proper notes is the difference between seeing 6 patients per hour and 4, with a commensurate drop in income. So notes virtually always get short-changed.

      Notes that very occasionally hallucinate are better than notes that are almost always incomplete.

      • fhfjfk 3 days ago

        What about increasing the supply of doctors rather than decreasing the time they spend with patients?

        I question the underlying premise that efficiency needs to increase.

        • SpaceLawnmower 3 days ago

          Note-taking is not about the time spent with patients. It's about keeping a good record for next time and for insurance, and it's a major reason for physician burnout. Some doctors will finish up charting after hours.

          Yes, physicians could still see fewer patients, but filling out their mandatory notes is annoying regardless of whether it's a manageable number of patients or a work-extra-hours number.

        • vladms 3 days ago

          In the case of summarizing, it is not time spent with the patient; it is time spent recording what was discussed.

          I recently heard a talk from Doctolib (a French company that, among other things, now offers a summarization service to doctors), and they mentioned that before AI, doctors were writing on average 144 characters after a patient visit. I doubt half a tweet is the ideal amount of text to convey that information.

        • daemin 3 days ago

          Because the USA has a for-profit healthcare industry, they need to optimise and increase efficiency. That's the only way to make the numbers go up. Therefore fewer doctors, fewer nurses, fewer administrators (maybe), and more paying patients.

          • bryanlarsen 2 days ago

            It's no better in other countries. Other countries' health care systems are typically monopsonies, which means we get crappier results for lower prices. So instead of the doctor choosing to do 6 patients per hour instead of 4, it's the government choosing.

          • anadem 3 days ago

            > more paying patients

            that's a chilling thought; don't give them ideas

            • slater 3 days ago

              Surprising that you imply they haven't already been capitalizing (ha!) on that idea since the 1970s :D

    • ToucanLoucan 3 days ago

      > AI converging with medical technology is a really dangerous space, for obvious reasons. An article like this does make me worry it's being rushed through, but not because of the author's objections; rather, because of their ignorance of what is and isn't AI

      I mean, it's worth pointing out that OpenAI has been shoving LLMs into people's faces for going on a year and a half now at global scale and calling it AI, to the degree that now we have to call AI "AGI", and LLMs get called AI, even though there is nothing intelligent about them whatsoever.

      Just saying, when the marketing for a tech is riding the edge of misinformation itself, it's pretty normal for the end users to end up pretty goddamn confused by the end.

      • bitwize 3 days ago

        The simple state machines and decision trees that make enemies move and attack in video games, we also call AI.

        AI is a loose term and always has been. It's like the term "robot": we call any machine a robot which either a) resembles a human or part of one (e.g., a robotic arm), or b) can perform some significant human labor that involves decision making (e.g., a robot car that drives itself). Similarly, AI is anything that makes, or seems to make, judgements that we think of as the exclusive purview of humans. Decision trees used to be thought of as AI, but today they are not (except, again, in the context of video games where they're used to control agents intended to seem alive).
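
        As a concrete illustration, a lot of what ships as game "AI" is just a handful of states and transitions; here's a hypothetical guard, purely for the sake of example:

          # Minimal sketch of game "AI": a guard as a three-state machine.
          # The states and transition rules are hypothetical, for illustration.
          def next_state(state: str, sees_player: bool, low_health: bool) -> str:
              if low_health:
                  return "FLEE"
              if state == "PATROL" and sees_player:
                  return "ATTACK"
              if state == "ATTACK" and not sees_player:
                  return "PATROL"
              return state

          state = "PATROL"
          for sees, hurt in [(True, False), (False, False), (True, True)]:
              state = next_state(state, sees, hurt)
              print(state)  # ATTACK, PATROL, FLEE

        No learning, no statistics, yet by long-standing convention it still counts as AI.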

        • ToucanLoucan 3 days ago

          > The simple state machines and decision trees that make enemies move and attack in video games, we also call AI.

          Yes, but no game company has ever marketed their game asserting that the things you're shooting are, in fact, self-aware conscious intelligences.

          • bitwize a day ago

            Tell me you weren't around for the PlayStation 2 marketing blitz without telling me you weren't around for the PlayStation 2 marketing blitz.

            They actually named the thing's CPU the Emotion Engine.

      • bryanlarsen 3 days ago

        For 60 years the Turing Test was the standard benchmark for artificial intelligence. Now machines can pass the test. The only goalpost-moving I can see is the moving done by the people who insist that LLMs aren't AI.

      • PhasmaFelis 3 days ago

        That's been the technical definition in the AI research community for at least 50 years. "AI = machines that think like people" is the sci-fi definition.

        • bluefirebrand 3 days ago

          > "AI = machines that think like people" is the sci-fi definition

          It's also the layman's definition

          Which does matter because laymen are the ones who are treating this current batch of AI as the silver bullet for all problems

        • ToucanLoucan 3 days ago

          Yes, but OpenAI is flagrantly exploiting the public's sci-fi understanding of the term to radically overvalue its, to be blunt, utterly mid products.

    • develatio 3 days ago

      > No matter how many tech bros try to force it through.

      Are you sure about that? My wife is a nurse, and she has to deal with multiple machines that were put in her unit just because the hospital has a contract with brand XYZ. It doesn't matter at all if these machines are 5x more expensive, 10x slower, 80x less effective, etc. compared with the "other" machines (from other brands).

      I'm actually terrified the same might happen with this.

  • mro_name 3 days ago

    There's this earthquake phrase in past tense:

    > We felt like we had agency.

  • mannyv 2 days ago

    This article is total bullshit.

    The author uses "AI" as a shortcut for "technology that I don't understand." Epic is an EMR, and apparently it scores patients using an "algorithm." Let's call that "AI", because AI is hot.

    Scribe sounds like a transcriber. Oh boy. I know of offices that have a literal scribe, a person whose job it is to follow a doc around and transcribe into Epic. Automated? Why not.

    Having just been in the ICU with a family member for 60 days, I can say that nurses are good at some things and horrible at other things. And big picture thinking isn't something most nurses seem to be good at. Listening to nurses talk about what's wrong in healthcare is like talking to soldiers about what's wrong in the military.

  • rubatuga 2 days ago

    What terrifies me is people will turn their brains off and blindly trust AI.

  • Melonotromo 2 days ago

    [dead]

  • oldpersonintx 3 days ago

    [dead]