As AI capabilities advance toward handling the complex medical scenarios doctors face daily, the technology remains controversial in medical communities.
I mean if the AI takes women seriously then that’s honestly already better than most of the doctors I’ve had
Unfortunately, if the data is biased, the model is biased.
Yes, that’s something that’s constantly emphasized in scientific research. You might have the most infallible algorithm, but… garbage in, garbage out. You’ll still get garbage results if what you feed into the algorithm is garbage
I was about to say…
Wonder what the success rate of doctors is. I’d be surprised if it is above 70% lol
Or if it’s better at diagnosing minorities, too.
It almost certainly won’t, but it’s nice to hope.
It might remove the face to face human bias of a GP but it doesn’t make up for the decades of preconceived or absent research about women or minorities.
To allow ChatGPT or comparable AI models to be deployed in hospitals, Succi said that more benchmark research and regulatory guidance is needed, and diagnostic success rates need to rise to between 80% and 90%.
Sucks if you’re one of the 10-20% who don’t get proper treatment (and maybe die?) because some doctor doesn’t have time to double-check. But hey … efficiency!
Yeah, that’s a fundamental misunderstanding of percentages. For an analogous situation we’re all more intuitively familiar with: a self-driving car that is 99.9% accurate in detecting obstacles still fails to detect, and potentially crashes into, one in every thousand people and/or things. That sucks.
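The arithmetic behind that comparison can be sketched in a few lines. The trial counts below are illustrative assumptions, not figures from the study:

```python
# Back-of-the-envelope: expected number of misses for a given accuracy.
# All counts below are illustrative assumptions, not figures from the study.

def expected_misses(accuracy: float, trials: int) -> int:
    """Expected failures when each trial succeeds with the given accuracy."""
    return round((1.0 - accuracy) * trials)

# A 99.9%-accurate obstacle detector, over a million detections:
print(expected_misses(0.999, 1_000_000))  # 1000 missed obstacles

# A 71.7%-accurate diagnostic model, over 10,000 cases:
print(expected_misses(0.717, 10_000))     # 2830 missed diagnoses
```

The point is that a percentage that sounds high still translates into a large absolute number of failures once the system is deployed at scale.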
Also, most importantly, LLMs are incapable of collaboration, something very important in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won’t develop meaningful, mutually beneficial relationships with its colleagues, who can ask each other for their thoughts when they don’t understand something. It’ll just spout bullshit when it’s wrong, not because it doesn’t know, but because it has no concept of knowing at all.
It really needs to be pinned to the top of every single discussion around ChatGPT:
It does not give answers because it knows. It gives answers because they look right.
Remember back in school when you didn’t study for a test and went through picking answers that “looked right” because you vaguely remember hearing the words in Answer B during class at some point?
It will never have wisdom and intuition from experience, and that’s critically important for doctors.
“Looks right” in a human context means the one that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the series of words have been seen together often in the training data (as I understand it, anyway - I am not an expert).
Doctors are most certainly not choosing treatment based on what words they’ve seen together.
Or one of the ninety-nine percent of people who don’t give the AI their symptoms in medical terminology.
So it’s about as good as people going to WebMD and diagnosing themselves? hoo boy
AI could ultimately improve both the efficiency and the accuracy of diagnosis as healthcare in the U.S. gets more expensive and complicated, as individuals live longer and the overall population ages
When you read some BS on the internet but thankfully it’s US-only
First in medical spending. 40th or 50th in positive medical outcomes.
Among Western nations.
Details matter here. Here are a few from the study:
ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI 67.8%-86.1%) and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; P<.001) and clinical management (β=–7.4%; P=.02) question types.
At the time of the study, 36 vignette modules were available on the web, and 34 of the 36 were available on the web as of ChatGPT’s September 2021 training data cutoff date. All 36 modules passed the eligibility criteria of having a primarily textual basis and were included in the ChatGPT model assessment.
All questions requesting the clinician to analyze images were excluded from our study, as ChatGPT is a text-based AI without the ability to interpret visual information.
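For context on how a figure like “71.7% (95% CI 69.3%-74.1%)” is typically produced: it’s usually a normal-approximation (Wald) interval on a proportion. A minimal sketch, where the counts passed in are made-up placeholders chosen to match ~71.7% accuracy, not the study’s actual question totals:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wald confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# Hypothetical counts chosen to give ~71.7% accuracy; NOT the study's real n.
low, high = wald_ci(717, 1000)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.689 to 0.745
```

Note the interval width depends on the (unknown here) number of questions: a larger n shrinks the interval.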
Those odds are shit. Meanwhile most of us have zero doctor.
Medicine is going to take a while for anything except small-scope tools that handle one specific thing, due to the massive variation in the presentation of different problems that doctors can face.
What won’t take as long, because there isn’t the same inherent variation in presentation, is law.
Sadly, that percentage is probably better than a good number of doctors.
About 70 percentage points better than my doctor. Nice!
Yeah, medicine is one of the areas where I really feel like AI could make serious strides. Most people don’t have a doctor they see regularly anyway so any input would be welcome. Anecdotally I’ve known several people who were misdiagnosed or just had doctors not believe them.
Of course I’d want to be able to escalate and have different treatment options but I could probably be ok with AI-assisted medicine.
That’s a whole 22% better than a coin toss!!!
Not really comparable since a medical diagnosis has hundreds of possible results
Having made many models that were only slightly better than a coin toss, that’s really not bad. Especially since that’s not even their primary design goal.
I don’t use ChatGPT and ain’t planning to, but someone should try asking it something like…
“How often should a male change their tampon?”
See what, if any nonsense it regurgitates.
“Men do not typically use tampons since they are designed for menstruation, which is a female biological process. If you have specific questions about personal hygiene or healthcare, it’s best to consult with a medical professional who can provide guidance based on your individual needs and circumstances.”
Okay then, well go figure. I was guessing it would puke up some nonsense, but apparently not.
Men that menstruate exist.
Now I’m no expert, but if you’re bleeding from your penis or your anus, you should probably go see a doctor about that.
I’m no expert
It shows.
I used the biological word ‘male’, not the opinionated word ‘man’. IDGAF what you identify as.
This isn’t up for debate, either you have XX or XY chromosomes. You were either born with a dick or you weren’t.
So unless you’re a hermaphrodite, you should probably get yourself checked if you are somehow bleeding from your penis.
Or not, I don’t care if your dick falls off.
This isn’t up for debate, either you have XX or XY chromosomes. You were either born with a dick or you weren’t.
There are so many biological exceptions to this that it’s not even funny. For example, it is very possible to have XY chromosomes yet not be born with a dick. Look up Androgen Insensitivity Syndrome for a start.
There are no absolutes when talking about sex and gender. None.
What? No documentation to back this up?
I mean seriously, are there any documented cases of anyone born with a penis yet also having monthly periods?
I can see already you’re not arguing in good faith, you’re just trying to raise a stink. Have a mediocre day.