• ✺roguetrick✺@lemmy.world · 6 days ago

    What are you going to train it on, since basic algorithms aren’t sufficient? Past committee decisions? If that’s the case, you’re hard-coding whatever human bias you’re supposedly trying to eliminate. A useless exercise.

    • Giooschi@lemmy.world · 6 days ago

      A slightly better metric to train it on would be chance of survival or years of life saved thanks to the transplant. However, those also suffer from human bias, because the past decisions that influenced who got a transplant also determined what data we were able to gather.
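      As a rough sketch of that data problem (hypothetical column names, nothing from a real registry): survival labels only exist for the people past committees approved, so a model trained on “years of life saved” quietly inherits those decisions.

```python
import pandas as pd

# Hypothetical waitlist extract; the columns are made up for illustration.
waitlist = pd.DataFrame({
    "patient_id":        [1, 2, 3, 4, 5],
    "approved_by_panel": [True, True, False, True, False],
    "years_survived":    [7.2, 1.4, None, 10.0, None],  # unknowable if never transplanted
})

# A "years of life saved" label only exists for patients the old committees approved,
# so any model fit on this table inherits those committees' selection bias.
training_set = waitlist[waitlist["approved_by_panel"]].dropna(subset=["years_survived"])
print(f"{len(training_set)} of {len(waitlist)} patients contribute a label at all")
```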

      • ✺roguetrick✺@lemmy.world · 6 days ago (edited)

        And we do that with basic algorithms informed by research. But then the scores get tied and we have to decide who has the greatest chance of following through on their regimen, based on things like past history and the means to acquire the medication, go to the appointments, follow a diet, and not drink. An AI model will optimize that based on wild demographic data that is correlative without being causative, and end up being a black-box racist in a way that a committee that has to explain its thinking to other members couldn’t. You watch.
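        A toy sketch of that “correlative without being causative” failure mode (synthetic data, made-up feature names): even with the sensitive attribute withheld, a model can reconstruct it from a correlated proxy and reproduce the bias baked into the historical adherence labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: 'group' is the sensitive attribute, 'neighborhood' is a proxy that
# agrees with it ~80% of the time, and the recorded "followed regimen" labels are
# biased against group 1 even though true adherence is identical across groups.
group = rng.integers(0, 2, n)
neighborhood = (group + (rng.random(n) < 0.2)) % 2
true_adherence = rng.random(n) < 0.7
recorded_label = true_adherence & ~((group == 1) & (rng.random(n) < 0.3))

# The model never sees 'group'...
X = neighborhood.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded_label)

# ...yet its predictions still differ by group, because the proxy leaks it back in.
scores = model.predict_proba(X)[:, 1]
print("mean predicted adherence, group 0:", round(scores[group == 0].mean(), 3))
print("mean predicted adherence, group 1:", round(scores[group == 1].mean(), 3))
```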

  • Fades@lemmy.world · 6 days ago

    The death panels Republican fascists claimed Democrats were running are now here, and they’re being run by Republicans.

    I hate this planet

  • FaceDeer@fedia.io · 7 days ago

    Yeah, I’d much rather have random humans I don’t know anything about making those “moral” decisions.

    “If you’ve already answered ‘No,’ you may skip to the end.”

    So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.

  • kemsat@lemmy.world · 6 days ago

    Yeah. It’s much more cozy when a human being is the one that tells you you don’t get to live anymore.

  • Imgonnatrythis@sh.itjust.works · 7 days ago

    That’s not what the article is about. I think putting some more objectivity into the decisions you listed, for example, benefits the majority. Human factors will lean toward minority factions consisting of people of wealth, power, or similar race, and toward how “nice” someone seems or how many vocal advocates they have. This paper just states that current AIs aren’t very good at what we would call moral judgment.

    It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g.: hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant, so consider adjusting the weighting in your existing algorithm by 0.5%.
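    A minimal sketch of the kind of nudge being described, with an entirely hypothetical flag and factor (neither is a real clinical criterion): the existing allocation algorithm keeps producing the score, and the flagged trend only adjusts it at the margin.

```python
def adjusted_allocation_score(base_score: float,
                              has_flagged_mutation: bool,
                              uncontrolled_hypertension: bool,
                              penalty: float = 0.005) -> float:
    """Apply a small multiplicative penalty when the flagged outcome trend applies.

    base_score comes from the existing, human-designed allocation algorithm;
    the gene/hypertension flag and the 0.5% figure are illustrative only.
    """
    if has_flagged_mutation and uncontrolled_hypertension:
        return base_score * (1 - penalty)
    return base_score

print(adjusted_allocation_score(82.0, True, True))    # 81.59
print(adjusted_allocation_score(82.0, False, True))   # 82.0
```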

    • MsPenguinette@lemmy.world · 7 days ago

      Though those complicated outcome trends can have issues with things like minorities having worse health outcomes due to a history of oppression and poorer access to healthcare. We will definitely need humans overseeing it, because health data can be misleading if you look purely at the numbers.

      • Imgonnatrythis@sh.itjust.works · 7 days ago

        I wouldn’t say definitely. AI is of course subject to bias as well, based on its training, but humans are very much so too, and inconsistently so. If you are putting a liver in a patient who has poorer access to healthcare, they are less likely to get as many life-years out of it as someone with better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I’d argue that those efforts toward improved equality are better spent further upstream. It gets complicated quickly: if you want it to be objective and scientifically successful, I think the less human bias the better.

    • phdepressed@sh.itjust.works · 7 days ago

      Creatinine in urine was used as a measure of kidney function for literal decades, despite African Americans having lower levels even when other measures showed worse kidney function. Creatinine level is/was a primary determinant of transplant eligibility. Only in the last few years have some hospitals started to use inulin, which is a more race- and gender-neutral measurement of kidney function.

      No algorithm matters if the input isn’t comprehensive enough, and cost-effective biological testing isn’t.
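      To make the mechanism concrete, a toy calculation (the ~1.16 coefficient echoes the race multiplier in the older 2009 CKD-EPI equation and 20 mL/min/1.73m² is a commonly used waitlisting cutoff, but treat both numbers as illustrative): inflating the reported eGFR can keep a patient above a cutoff that a race-free estimate would put them below.

```python
def waitlist_eligible(egfr_race_free: float,
                      patient_is_black: bool,
                      apply_race_coefficient: bool,
                      cutoff: float = 20.0) -> bool:
    """Toy model of the old practice: a multiplicative race coefficient inflated the
    reported eGFR for Black patients, which could delay transplant eligibility.
    Coefficient and cutoff are approximate and for illustration only."""
    multiplier = 1.16 if (apply_race_coefficient and patient_is_black) else 1.0
    reported_egfr = egfr_race_free * multiplier
    return reported_egfr <= cutoff

# Same underlying kidney function, different answer depending on the equation used.
print(waitlist_eligible(18.0, patient_is_black=True, apply_race_coefficient=True))   # False
print(waitlist_eligible(18.0, patient_is_black=True, apply_race_coefficient=False))  # True
```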

    • sunzu2@thebrainbin.org · 7 days ago

      I agree with you but also

      “It seems like algorithms would be the most objective way to do this”

      Algos are just another tool corpos and owners use to abuse. They aren’t independent; they represent the interests of their owners, and they oppress the peon class.

      • CherryBullets@lemmy.ca · 6 days ago

        Yep, basically. Here’s how it’s gonna go: instead of basing transplant triage on morals, priority, and respect for human life as priceless and equal, the AI will base it on your occupation within society, age, sex, and how much money you make for the rich overlords if you recover. Fuck that noise.

        • sunzu2@thebrainbin.org · 6 days ago

          That’s kinda how it already works; we just need to optimize it even more to ensure that only the best people get the organs.

          • CherryBullets@lemmy.ca · 6 days ago

            That is not how it “basically works” where I live; doctors don’t care about what I do for a living or how much money I have, and they just treat me like everyone else. Triage is by priority (as in urgency and compatibility of the organ). If they used AI, it wouldn’t be for the choice itself, but for keeping track of the waiting list. The AI itself choosing based on criteria like age, sex, race, work, or culture would be unethical.

    • StructuredPair@lemmy.world · 7 days ago

      Everyone likes to think that AI is objective, but it is not. It is biased by its training, which includes a lot of human bias.

  • SabinStargem@lemmings.world · 6 days ago

    I don’t mind AI. It is simply a reflection of whoever is in charge of it. Unfortunately, we have monsters who direct humans and AI alike to commit atrocities.

    We need to get rid of the demons, else humanity as a whole will continue to suffer.

    • Grass@sh.itjust.works · 6 days ago

      Everything Republicans complained about can be done under Trump, twice as bad and twice as evil, and they will be ‘happy’ and sing his praises.

  • Steve Dice@sh.itjust.works · 6 days ago

    Hasn’t it been demonstrated that AI is better than doctors at medical diagnostics, and that we don’t use it only because hospitals would have to take the blame if the AI fucks up, whereas they can just fire a doctor who fucks up?

    • cynar@lemmy.world · 6 days ago

      I believe a good doctor, properly focused, will outperform an AI. AIs are also still prone to hallucinations, which is extremely bad in medicine. Where they win is against a tired, overworked doctor with too much on their plate.

      Where it is useful is as a supplement. An AI can put a lot of seemingly innocuous information together to spot more unusual problems. Rarer conditions can be missed, particularly if they share symptoms with more common problems. An AI that can flag possibilities for the doctor to investigate would be extremely useful.

      An AI diagnostic system is a tool for doctors to use, not a replacement.
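      A minimal sketch of that “flag, don’t decide” pattern (hypothetical model output, made-up condition names and probabilities): rarer candidates that clear a review threshold get surfaced, and the clinician makes the actual call.

```python
FLAG_THRESHOLD = 0.05  # illustrative review threshold, not a validated cutoff

def flag_for_review(differential: dict[str, float], common: set[str]) -> list[str]:
    """Return the rarer candidate diagnoses whose model probability clears the threshold."""
    return sorted(
        (cond for cond, p in differential.items()
         if cond not in common and p >= FLAG_THRESHOLD),
        key=differential.get,
        reverse=True,
    )

# Hypothetical output from some diagnostic model.
model_output = {"viral gastroenteritis": 0.62, "appendicitis": 0.21,
                "porphyria": 0.08, "lead poisoning": 0.04}
common_diagnoses = {"viral gastroenteritis", "appendicitis"}

print(flag_for_review(model_output, common_diagnoses))  # ['porphyria']
```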