44 comments

  • ipnon 5 hours ago

    Similarly, the main calculator used in the US to estimate 10-year risk of a cardiovascular incident literally cannot compute scores for people under 40.[0] There are two consequences. The first is that if you are under 40, you will never encounter a physician who believes you are at risk of heart attack or stroke, even though over 100,000 Americans under 40 experience such an incident each year. The second is that even if you have a heart attack or stroke due to their negligence, they will never be liable, because that calculator is considered the standard of care in malpractice law!

    Governing bodies write these guidelines that act like programs, and your local doctor is the interpreter.[1] When was the last time you found a bug that could be attributed to the interpreter rather than the programmer?

    [0] https://tools.acc.org/ascvd-risk-estimator-plus/#!/calculate...

    [1] It’s worth considering what medical schools, emergency rooms, and malpractice lawyers are analogous to in this metaphor.
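
    As a concrete illustration of the footnote: a minimal, hypothetical sketch of what a hard age floor looks like once a guideline is written down as a program. This is not the ACC estimator's actual code; the function name, inputs, and cutoffs are assumptions for illustration (the underlying pooled-cohort score is generally described as validated only for ages 40-79).

        # Hypothetical sketch only, not the ACC ASCVD Risk Estimator's real code.
        def ten_year_ascvd_risk(age, systolic_bp, total_chol, hdl, smoker, diabetic):
            if age < 40 or age > 79:
                # The guideline simply declines to compute outside its validated
                # range, so every under-40 patient falls through this branch.
                raise ValueError("risk score is only defined for ages 40-79")
            return 0.0  # placeholder for the pooled-cohort equations

        try:
            ten_year_ascvd_risk(age=35, systolic_bp=150, total_chol=240,
                                hdl=35, smoker=True, diabetic=False)
        except ValueError as err:
            print(err)  # the 35-year-old never gets a score, however bad the inputs

    A doctor "interpreting" the guideline faithfully inherits exactly that blind spot, which is where the bug actually lives.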

    • lazyasciiart 4 hours ago

      I had a heart attack at 35, despite not really having any other risk factors. Having a sibling who had a heart attack is the biggest risk factor, yet my sister later did not qualify for a study on heart attack risk because she was only 39.

      My ER notes literally say “can’t be a heart attack but that’s what it looks like, so we’ll treat it as one for now”, which is a little unnerving.

      • rscho 2 hours ago

        > is a little unnerving

        Why so? You were lucky! You had a low probability for the diagnosis, but the doc made the right decision. That's to be celebrated.

        > did not qualify for a study on heart attack risk because she was only 39.

        Criteria for studies are designed to test a specific hypothesis. There are many possible reasons why your sister was not eligible, and not all of them bad.

        • yapyap an hour ago

          > Why so? You were lucky! You had a low probability for the diagnosis, but the doc made the right decision. That's to be celebrated

          Because they still said “can’t”.

          • rscho 33 minutes ago

            If this doc meant 'can't' in the literal sense, why go through with the workup, then? That is, IMO, evidence that the doc meant 'very unlikely, but let's check'. I agree words are important, but the right decision was still made, and that's cool.

    • rscho 5 hours ago

      > if you are under 40 you will never encounter a physician who believes you are at risk of heart attack or stroke

      This is absolutely not true. Only someone who knows nothing about healthcare could come to such a conclusion.

      > guidelines that act like programs, and your local doctor is the interpreter.

      Such reframing is irrational. You are forcing scientific evidence into a mechanical framing, when clinical practice is an almost completely empirical affair. It doesn't work like that at all.

      • yieldcrv 4 hours ago

        Then the entire medical industry is failing at communicating that

        The relatability of OP’s shared experience has many of us wanting to replace most medical professionals with genAI language models as soon as regulations allow

        • mannykannot 18 minutes ago

          > Then the entire medical industry is failing at communicating that.

          Rebutting hyperbolic extrapolations from literally one datum is still not something that the entire medical industry - or just my PCP and cardiologist, for that matter - should be prioritizing (unless they have a patient doing just that), even if the prevalence of such claims has increased over the last decade or so.

          The aforementioned professionals had no hesitation in taking my pre-40 symptoms of heart disease seriously, even though I did not present any of the correlates frequently associated with it.

        • rscho 4 hours ago

          > replace most medical professionals with genAI language models as soon as the regulations allow

          Understandable, I guess. But not feasible now, nor in the foreseeable future. The problem is not even "AI" performance. The real problem is that the useful data isn't available to machines, because it's mostly acquired through meeting patients in person. It's gonna take lots of money to make machines that can compensate for that.

          • yieldcrv 4 hours ago

            Multimodal language models have already proven good at accepting imaging input and noticing things that professionals overlook

            I don't see how a requirement to meet the patient in person is an issue. They can listen to the patient, have a context window large enough to analyze their medical history and environmental factors, look at charts, and review tissue diagnostics

            and still show much greater EQ, a greater ability to affirm, and more empathy than the dismissive high-IQ doctor ever will

            humans are going to choose that, because smart humans don't have those attributes

            • rscho 3 hours ago

              It is said that "90% of diagnosis is made on patient history". That's the whole problem for machines. We'll need machines able to converse, to integrate patient appearance, behaviour, etc. as well as humans do, and to reliably derive the appropriate conclusions from that, before we get efficient medical AI. We'll see how fast progress can be made, but from what I see of chatGPT and the like, I seriously doubt the current AI wave will achieve acceptable results in real, everyday medicine. IMO, procedural medicine where lots of multimodal info is always available and the environment is relatively fixed, such as (simple) surgery, is a better candidate for (reliable) automation in the near term. Something like prosthetic orthopedics, maybe?

              • mquander 3 hours ago

                Isn't it dramatically easier to provide more useful history to machines?

                If I'm providing history to a doctor I am pretty much trying to jam the history into a two minute explanation, and they are trying to remember our previous interactions based on short summarized notes that they made without my help.

                If I'm providing history to a machine I can take my time to tell the machine as much as I want every time. I can send it whole spreadsheets of symptom logging and tell it my whole life story.

                • rscho 3 hours ago

                  Maybe for you, but not for most people. Because most people do not behave the way you are describing. Most people express themselves in vague, sometimes incomprehensible ways linked to their cultural and personal background. Their priorities might not be aligned with their best interests at all. Some will even think it clever to hide info from the doc, because they are prejudiced against docs or fear being reported, etc. That's why a skilled clinician is first of all a skilled interrogator, and second an accurate observer. The way you look, behave, walk and talk is very often of more value than lab tests. That's what a good GP is actually: someone good at extracting information from people. An unfortunate consequence of that is that every doc you'll meet will want to hear your story again, which gets old fast for patients.

                  • yieldcrv 2 hours ago

                    but almost nobody has a skilled clinician or a good GP

                    or at least not a skilled/good one at that point in time, because their clinician is hungry, has a random bias against that person’s communication style, or against their insurer

                    or, in the US, you changed jobs and your insurer changed and you need a new doctor in an applicable network

                    I’m amused how all of your explanations and rebuttals reinforce the path to irrelevancy

                    • rscho an hour ago

                      > I’m amused how all of your explanations and rebuttals reinforce the path to irrelevancy

                      As I said, one day, certainly. But if you think current tech is up to par, then I'm sorry, but you are being delusional. Also, you assume I'm trying to defend the status quo. That's not the case. I'm all for progress.

              • JumpCrisscross 2 hours ago

                > We'll need machines able to converse and integrate patient appearance, behaviour etc. as well as humans, and reliably derive the appropriate conclusions from that before we get efficient medical AI

                This presupposes that the problem of medical records has been solved.

                • rscho 2 hours ago

                  No, this presupposes that the machine won't interact solely with the medical record, but mostly directly with the patient. At least, that's my understanding. In this view, medical records won't be just text records anymore, but records of the whole system 'sensorium' for lack of a better term.

    • hombre_fatal 5 hours ago

      Out of curiosity, how is a physician negligent if decades of exposure to hypertension/LDL/smoking/diabetes (the variables on that calculator) give you a heart attack or stroke?

      By the time you're put on a statin, for example, you've already had decades of exposure due to your lifestyle.

      Also, I don't believe the claim that physicians don't care about CVD risk in patients <40yo including high blood pressure and high cholesterol.

      • zamadatix 4 hours ago

        Flip the issue to something less polarizing and it should become clear that this is a very different scenario from what GP is talking about (even if perhaps you still don't agree it should be malpractice for some reason):

        1) You go in feeling confused and with a headache after falling off a skateboard with no helmet. The ER sends you home without having checked anything or left any notes about what to watch out for, because they think you're too young to have problems from a fall (despite many young people having problems after falls each year). At home you die of a brain bleed.

        vs.

        2) You go in feeling confused and with a headache after falling off a skateboard with no helmet. The ER runs some tests, sees the problem, and prescribes the best course of treatment given this information. Despite this you still die, or suffer lasting effects on your brain.

        Although the doctors fail to fully remedy your problem in both situations, only situation 1 involves negligence for a malpractice claim, because the problem isn't the outcome; it's the quality of treatment falling below the minimum level. Flip the scenario specifics back, and what GP is saying is that it isn't considered negligence to say "you're under 40, you're fine, go home" instead of "you could seriously be having a problem; we should put you on a statin and talk over the risks/symptoms of a heart attack", because the standard of care (roughly, one measure of whether a treatment decision was negligent) says the calculator defines the appropriate treatment, and the calculator doesn't even work for those <40. What GP is not saying is that doctors are negligent just because you still had a heart attack anyway.

        • adastra22 3 hours ago

          Any ER would check for a concussion in that circumstance, as I can attest from experience.

          • zamadatix an hour ago

            Almost certainly. That's why not doing so is used as a clear example of malpractice and negligence - the standard of care says to check for those kinds of issues given the situation and that's what nearly every doctor will therefore do.

      • zamadatix an hour ago

        (separating this out)

        I strongly agree with you here: "Also, I don't believe the claim that physicians don't care about CVD risk in patients <40yo including high blood pressure and high cholesterol."

        The claim seems odd overall. My physician, unprompted, wanted to put me on a statin when I was very healthy and in my early 30s, just to lower my risk, because my cholesterol numbers were trending up at the time. Whether or not this calculator actually works for those under 40, physicians certainly still prescribe statins, evaluate heart-health risks, and communicate the dangers of poor heart health to individuals all the time anyway.

      • hgomersall 3 hours ago

        Almost all ailments can be mitigated to some extent by lifestyle choices. Is anyone who doesn't make the best possible choices for a particular ailment responsible for their situation?

        • hombre_fatal 3 hours ago

          The question in this context is whether they are less responsible for their lifestyle choices than their physician.

          We're talking about the variables in the calculator: blood pressure, cholesterol, smoking, and diabetes.

          Which one of those is the physician more responsible for than the patient?

    • hansvm 5 hours ago

      What happens if the doctor says the tool is likely wrong and gives a reasonable (according to their peers) reason why? Does the court blindly accept some algorithm over hard-earned experience?

      • rscho 3 hours ago

        No, typically a court would summon an expert on the topic for testimony. Such an expert, like most any doc, would understand the limits of guidelines/calculators/etc. and judge accordingly. A typical clinical presentation resulting in a missed diagnosis would not fly at all under this process, but an atypical presentation in a very low-probability context (young patient, no risk factors) might get through. Also, contrary to popular belief, docs absolutely do not cover for each other in court.

    • thrw42A8N 5 hours ago

      > When was the last time you found a bug that could be attributed to the interpreter rather than the programmer?

      On the other hand, when was the last time you used a custom one-off interpreter?

  • kreyenborgi 5 days ago

        > The choice of a 5-year period seems to be because of data availability
    
    Also known as "looking for the keys under the lamp-post" https://en.wikipedia.org/wiki/Streetlight_effect (which links to https://en.wikipedia.org/wiki/McNamara_fallacy which I hadn't heard of before, but which seems to fit very well here too).

        > An algorithmic absurdity: cancer improves survival
        > [...] 
        > algorithmic absurdity, something that would 
        > seem obviously wrong to a person based on common sense.
    
    A useful term!

    > optimize “quality-adjusted” life years

    https://repaer.earth/ was posted on HN recently as an extreme example of this hehe
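
    For anyone unfamiliar with the term: a QALY tally just weights each year lived by a quality factor between 0 and 1. A toy sketch, with entirely made-up weights, of the comparison a "quality-adjusted" optimizer performs:

        # Toy QALY arithmetic; the quality weights here are invented for illustration.
        def qalys(periods):
            """periods: list of (years, quality_weight) pairs, weight in [0, 1]."""
            return sum(years * weight for years, weight in periods)

        with_transplant    = qalys([(2, 0.7), (18, 0.9)])  # rough recovery, then good health
        without_transplant = qalys([(3, 0.4)])              # declining health, no transplant
        print(with_transplant - without_transplant)         # "quality-adjusted" years gained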

  • steveBK123 5 hours ago

    I think I've worked in software/data long enough to be very very suspicious of a one-size-fits-all algorithm like this. I would be very hesitant to entrust something like organ matching to a singular matching system.

    There are so many ways to get it wrong - bad data, bad algo design/requirements, mistakes in implementation, people understanding the system too well being able to game it, etc.

    Human systems have biases, but at least those biases are diverse when there are many decision makers. If you put something important behind a single algorithm, you are inadvertently locking in one fixed bias.

    • icegreentea2 5 hours ago

      What does a non "one size fits all" approach to organ matching look like? How would a non-singular matching system work? Do you arbitrarily (randomly?) split organs into different pools and let each pool match by a different algorithm?

      • toast0 5 hours ago

        So, from the article, it sounds like the current UK system for liver transplant matching was developed to replace the previous regional systems. It's not clear whether all of those used the same process to determine matches, but it would have been possible for them to develop different processes.

        It's also likely that a cross-regional system existed, perhaps on an ad-hoc basis. If you had a patient with an exceptional need, you might ask the other regions to be on the lookout for an exceptional liver that works just right for your patient. That sort of thing is harder to do in a national system where livers are allocated based on scores.

        Another thing that's helpful with multiple systems is it encourages reviewing and comparing results.

        For a single system, reviewing results is even more important, but comparing is harder. You might look at things like the demographics of patients who died from liver disease while on the list, including how long they were on it; how long the current patients have been waiting; and the demographics of people who receive a transplant and how long they waited.

        If there's a bias against young people, you would likely see more young people with long wait times, etc.
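
        To make that concrete, here is a rough sketch of such a review, with invented field names and records (the real registry schema is obviously different): group wait times by age band and look for a persistent gap.

            from statistics import median

            # Invented sample records; a real audit would use the full registry,
            # including patients who died while waiting.
            waiting_list = [
                {"age": 28, "days_waiting": 410},
                {"age": 34, "days_waiting": 395},
                {"age": 61, "days_waiting": 120},
                {"age": 55, "days_waiting": 150},
            ]

            def age_band(age):
                return "under 40" if age < 40 else "40 and over"

            for band in ("under 40", "40 and over"):
                waits = [p["days_waiting"] for p in waiting_list if age_band(p["age"]) == band]
                print(band, "median days waiting:", median(waits))

        A persistent gap between the bands, or a pile-up of young patients with very long waits, is the sort of signal such a review would look for.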

      • steveBK123 5 hours ago

        Yes, in the US it might look like state-level or hospital-system-level matching versus one singular national matching system.

        The US has its problems, but sometimes the "laboratory of ideas" that is a federated system of 50 states prevents bad outcomes like this.

        • RyanHamilton 5 hours ago

          The lab of ideas also advantages the rich, e.g. Steve Jobs: "In 2009, Steve Jobs received a liver transplant—not in northern California where he lived, but across the country in Memphis, Tennessee. Given the general complications of both travel and a transplant, Jobs’ decision may seem like an odd choice. But it was a strategic move that almost certainly got him a liver much more quickly than if Jobs had just waited for a liver to become available in California." https://arstechnica.com/science/2017/03/live-death-math-and-...

        • icegreentea2 5 hours ago

          The challenge is maintaining multiple independent systems when faced with pressures like "hey, if we consolidated systems, the % of waiting-list patients who die within 6 months of enrolling goes from 8% to 4%, and the % who receive a transplant goes from 60% to 65%".

          The UK system undoubtedly had a bad outcome, but the reasoning behind consolidation was sound, and the benefits were real and ACTUALLY achieved (just not distributed justly). Maintaining independent systems would mitigate some of these failures, but would in the long term be outperformed by a responsive consolidated system (which I think is ultimately what the article is arguing for - not against algorithms, but against black-box algorithms that are not responsive or amenable to public scrutiny and feedback).

          There are definitely times and places where independent implementations provide strong benefits, but I think this is a much more borderline scenario.

          And btw, the US has a unified organ matching system.

  • icegreentea2 5 hours ago

    I think the generalized take away from this article, and the position held by the authors is: "Overall, we are not necessarily against this shift to utilitarian logic, but we think it should only be adopted if it is the result of a democratic process, not just because it’s more convenient." and "Public input about specific systems, such as the one we’ve discussed, is not a replacement for broad societal consensus on the underlying moral frameworks.".

    I wonder how exactly this would work. As the article identifies, health care in particular is continuously barraged with questions of how to allocate limited resources. I think the article is right to say that the public was probably in the dark about the specifics of this algorithm, and that the transition to utilitarian decision-making frameworks (i.e. algorithms) was probably -not- arrived at by a democratic process.

    But I think if you had run a democratic process on the principle of using utilitarian logic in health care decision making, you would have ended up with consensus to go ahead. And that returns us to this specific algorithmic failure. What is the scalable process for retaining democratic oversight of these algorithms? How far down do we push? ERs have triage procedures; are these in scope? If so, what do the authors imagine the oversight and control process would look like?

    • loeg 5 hours ago

      Hm, I think the bigger issue presented is that the algorithm in question is heavily biased against younger patients -- it deviates significantly from an ideal utilitarian model.

      • icegreentea2 4 hours ago

        Right, so there was a flawed implementation. Even if you had democratic consent to "implement a utilitarian organ matching model", that would not prevent this failure mode.

        So what is the governance and oversight framework for ensuring democratic consent from ideation to implementation to monitoring, and how does it differ from what the UK did? The article points out that there were multiple reviews of the algorithm that identified this bias as far back as 2019. What is the process that connects that feedback with the democratic process, to ensure that flawed implementations never deploy, or are adjusted quickly?

  • jwilk 2 hours ago

    The Financial Times article discussed on HN:

    https://news.ycombinator.com/item?id=38202885 (22 comments)

  • Havoc 4 hours ago

    I'd very much hope it is biased towards younger patients, if anything.

  • binary132 3 hours ago

    So, if I as a 38-year-old had a mild liver impairment which could reduce my life expectancy to 60 (22 years from now) I should get priority over a 60-year-old with a debilitating, excruciating condition which will end his life in six months, merely because his life expectancy with the transplant may only be 70?

    That’s an outrageous and obscene utility calculation to propose and it should be obviously so to just about anyone.
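
    Spelling out the naive arithmetic behind that scenario (the ~80-year post-transplant life expectancy for the younger patient is my own assumption, purely for illustration):

        # Naive "maximize life-years" arithmetic for the scenario above.
        young_without, young_with = 60 - 38, 80 - 38   # 22 vs 42 remaining years
        old_without,   old_with   = 0.5,     70 - 60   # 6 months vs 10 remaining years

        print("young gain:", young_with - young_without)  # 20 years
        print("old gain:  ", old_with - old_without)      # 9.5 years

    Under that rule the 38-year-old wins on both remaining years and years gained, which is exactly the calculation I'm calling outrageous.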

    • JumpCrisscross 3 hours ago

      > if I as a 38-year-old had a mild liver impairment which could reduce my life expectancy to 60 (22 years from now) I should get priority over a 60-year-old with a debilitating, excruciating condition which will end his life in six months, merely because his life expectancy with the transplant may only be 70

      No, because it's mild and could reduce your life expectancy. Once it becomes severe and the "could" becomes a "will", then yes--you should.

    • mananaysiempre 2 hours ago

      Triage and similar practices are, as a rule, outrageous and obscene. Doesn’t necessarily make them wrong, just something most choose to be ignorant of.

  • gyudin 3 hours ago

    What can go wrong when you let government agencies with no expertise develop and maintain AI models and algorithms, right?

    And then we get articles saying that AIs are biased, racist and don’t work as expected and that AI in general as a technology has no future.

    I can even predict what their solution will be, lmao: pay an atrocious lump of money to big consulting agencies with no expertise to develop it for them, and fail again.

    • cedws 3 hours ago

      The fairest way to do it, I feel, is FIFO. Yeah, you might give an organ to a 70-year-old on their last legs, but they don’t have any less of a right to live than anyone else. After the Horizon scandal, public trust in complicated computer systems is at an all-time low. It shouldn’t be an opaque system making such important decisions. Everything should be in the open and explainable.
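
      For what it's worth, the policy I'm describing is trivially simple to explain; a minimal sketch (ignoring the compatibility checks that would obviously still apply):

          from collections import deque

          # Patients join the queue in order of enrolment.
          waiting = deque(["patient_A", "patient_B", "patient_C"])

          def allocate_next_organ():
              # Whoever has waited longest gets the next organ, full stop.
              return waiting.popleft()

          print(allocate_next_organ())  # patient_A, regardless of age or expected benefit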