As AI capabilities advance on the kinds of complex medical scenarios doctors face every day, the technology remains controversial in medical communities.

  • CyberCatBytes@kbin.social · 1 year ago

    I mean if the AI takes women seriously then that’s honestly already better than most of the doctors I’ve had

      • Contramuffin@lemmy.world · 1 year ago

        Yes, that’s something that’s constantly emphasized in scientific research. You might have the most infallible algorithm, but… garbage in, garbage out. You’ll still get garbage results if what you feed into the algorithm is garbage.

    • Domille@sh.itjust.works · 1 year ago

      I was about to say…

      Wonder what the success rate of doctors is. I’d be surprised if it is above 70% lol

      • cooopsspace@infosec.pub · 1 year ago

        It almost certainly won’t, but it’s nice to hope.

        It might remove the face-to-face human bias of a GP, but it doesn’t make up for the decades of biased or simply absent research on women and minorities.

  • Qu4ndo@discuss.tchncs.de · 1 year ago

    To allow ChatGPT or comparable AI models to be deployed in hospitals, Succi said that more benchmark research and regulatory guidance is needed, and diagnostic success rates need to rise to between 80% and 90%.

    Sucks if you’re one of the 10-20% who don’t get proper treatment (and maybe die?) because some doctor doesn’t have time to double-check. But hey … efficiency!

    • theluddite@lemmy.ml · 1 year ago

      Yeah, that’s a fundamental misunderstanding of percentages. For an analogous situation we’re all more intuitively familiar with: a self-driving car that is 99.9% accurate at detecting obstacles still crashes into one in every thousand people and/or things. That sucks.
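
      A rough back-of-the-envelope in Python makes the scale concrete (the accuracy figures come from the article and the car analogy above; the case volume is invented purely for illustration):

      ```python
      # Back-of-the-envelope only: the case volume is a made-up illustration.
      for accuracy in (0.717, 0.90, 0.999):  # study's overall figure, the 80-90% target, the car analogy
          cases = 10_000                     # hypothetical number of diagnoses
          wrong = round(cases * (1 - accuracy))
          print(f"{accuracy:.1%} accurate -> roughly {wrong} of {cases} cases handled wrong")
      ```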

      Also, most importantly, LLMs are incapable of collaboration, something very important in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won’t develop meaningful, mutually beneficial relationships with its colleagues, who can ask each other for their thoughts when they don’t understand something. It’ll just spout bullshit when it’s wrong, not because it doesn’t know, but because it has no concept of knowing at all.

      • deweydecibel@lemmy.world · 1 year ago

        It really needs to be pinned to the top of every single discussion around ChatGPT:

        It does not give answers because it knows. It gives answers because they look right.

        Remember back in school when you didn’t study for a test and went through picking answers that “looked right” because you vaguely remember hearing the words in Answer B during class at some point?

        It will never have wisdom and intuition from experience, and that’s critically important for doctors.

          • ourob@discuss.tchncs.de · 1 year ago

            “Looks right” in a human context means the answer that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the sequence of words has been seen together often in the training data (as I understand it, anyway - I am not an expert; rough sketch of the idea below).

            Doctors are most certainly not choosing treatment based on what words they’ve seen together.
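
            A toy sketch of that idea in Python, purely illustrative: real LLMs learn probabilities over tokens with neural networks rather than keeping raw co-occurrence counts like this.

            ```python
            from collections import Counter, defaultdict

            # Toy "training data"; any real corpus would be vastly larger.
            corpus = "chest pain and shortness of breath chest pain and sweating".split()

            # Count which word tends to follow which.
            next_word = defaultdict(Counter)
            for a, b in zip(corpus, corpus[1:]):
                next_word[a][b] += 1

            def looks_right(word):
                """Pick the continuation seen most often after `word` in the corpus."""
                return next_word[word].most_common(1)[0][0]

            print(looks_right("pain"))  # -> "and", simply because that pairing is frequent
            ```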

    • treefrog@lemm.ee · 1 year ago

      Or one of the ninety-nine percent of people who don’t give the AI their symptoms in medical terminology.

  • weew@lemmy.ca · 1 year ago

    So it’s about as good as people going to WebMD and diagnosing themselves? hoo boy

  • Aopen@discuss.tchncs.de · 1 year ago

    AI could ultimately improve both the efficiency and the accuracy of diagnosis as healthcare in the U.S. gets more expensive and complicated as individuals live longer, and the overall population ages

    When you read some BS on the internet but thankfully it’s US only.

    • treefrog@lemm.ee · 1 year ago

      First in medical spending. 40th or 50th in positive medical outcomes.

      Among Western nations.

  • geosoco@kbin.social · 1 year ago

    Details matter here. Here’s a few from the study:

    ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI 67.8%-86.1%) and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; P<.001) and clinical management (β=–7.4%; P=.02) question types.

    At the time of the study, 36 vignette modules were available on the web, and 34 of the 36 were available on the web as of ChatGPT’s September 2021 training data cutoff date. All 36 modules passed the eligibility criteria of having a primarily textual basis and were included in the ChatGPT model assessment.

    All questions requesting the clinician to analyze images were excluded from our study, as ChatGPT is a text-based AI without the ability to interpret visual information.
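
    For anyone not used to the notation in that quote: the 95% CI is just the uncertainty band around a proportion of correctly answered prompts. A minimal sketch using the normal approximation, with a placeholder question count (not the study’s actual numbers):

    ```python
    import math

    def wald_ci_95(correct: int, total: int) -> tuple[float, float]:
        """Normal-approximation 95% confidence interval for a proportion."""
        p = correct / total
        half_width = 1.96 * math.sqrt(p * (1 - p) / total)
        return p - half_width, p + half_width

    # Placeholder counts chosen to land near the quoted 71.7%; NOT the study's data.
    low, high = wald_ci_95(correct=717, total=1000)
    print(f"71.7% accurate, 95% CI {low:.1%}-{high:.1%}")  # interval narrows as the prompt count grows
    ```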

  • Candelestine@lemmy.world · 1 year ago

    Medicine is going to take a while for anything beyond small-scope tools that each handle one specific thing, due to the massive variation in how the problems doctors face can present.

    What won’t take as long, because there isn’t the same inherent variation in presentation, is law.

    • norbert@kbin.social · 1 year ago

      Yeah, medicine is one of the areas where I really feel like AI could make serious strides. Most people don’t have a doctor they see regularly anyway so any input would be welcome. Anecdotally I’ve known several people who were misdiagnosed or just had doctors not believe them.

      Of course I’d want to be able to escalate and have different treatment options but I could probably be ok with AI-assisted medicine.

    • AlataOrange@lemmy.world · 1 year ago

      Having made many models that were only slightly better than a coin toss, I’d say that’s really not bad. Especially since diagnosis isn’t even ChatGPT’s primary design goal.

  • over_clox@lemmy.world · 1 year ago

    I don’t use ChatGPT and ain’t planning to, but someone should try asking it something like…

    “How often should a male change their tampon?”

    See what nonsense, if any, it regurgitates.

    • b_crussin@programming.dev · 1 year ago

      “Men do not typically use tampons since they are designed for menstruation, which is a female biological process. If you have specific questions about personal hygiene or healthcare, it’s best to consult with a medical professional who can provide guidance based on your individual needs and circumstances.”

      • over_clox@lemmy.world · 1 year ago

        Okay then, well go figure. I was guessing it would puke up some nonsense, but apparently not.

      • over_clox@lemmy.world · 1 year ago

        Now I’m no expert, but if you’re bleeding from your penis or your anus, you should probably go see a doctor about that.

          • over_clox@lemmy.world · 1 year ago

            I used the biological word ‘male’, not the opinionated word ‘man’. IDGAF what you identify as.

            This isn’t up for debate, either you have XX or XY chromosomes. You were either born with a dick or you weren’t.

            So unless you’re a hermaphrodite, you should probably get yourself checked if you are somehow bleeding from your penis.

            Or not, I don’t care if your dick falls off.

            • stopthatgirl7@kbin.social (OP) · 1 year ago

              This isn’t up for debate, either you have XX or XY chromosomes. You were either born with a dick or you weren’t.

              There are so many biological exceptions to this that it’s not even funny. For example, it is very possible to have XY chromosomes yet not be born with a dick. Look up Androgen Insensitivity Syndrome for a start.

              There are no absolutes when talking about sex and gender. None.

              • over_clox@lemmy.world · 1 year ago

                What? No documentation to back this up?

                I mean seriously, are there any documented cases of anyone born with a penis yet also having monthly periods?

                • stopthatgirl7@kbin.social (OP) · 1 year ago

                  I can see already that you’re not arguing in good faith; you’re just trying to raise a stink. Have a mediocre day.