  • Part of my significant suspicion regarding AI is that most of my medical experience is in Emergency Medicine, which is also my intended specialty upon graduation. The only thing AI might be useful for there is functioning as a scribe. The AI is not going to tell me that the patient who denies any alcohol consumption smells like a liquor store, or that the completely unconscious patient has asterixis (flapping tremor). AI cannot tell me anything useful for my most critical patients, and for the less critical ones, I am perfectly capable of pulling up UpToDate or DynaMed and finding the thing I’m looking for myself. Maybe it can be useful for suggesting next steps, but for the initial evaluation? Nah. I don’t trust a glorified text predictor to catch the things that will kill my patients in the next 5 minutes.


  • It’s entirely possible that it just wasn’t diagnosed until very recently. Prostate cancer screening is not a standard recommendation at his age, and a lot of cancers are very insidious. Often, if there wasn’t a screening test for it, a cancer is caught because of the symptoms of metastasis, meaning that unless we’re screening for it, we don’t catch it until it’s already progressed.

    Some people are more attuned to their bodies and might notice the smaller, earlier symptoms, but for prostate cancer those can be pretty easy to miss, and the primary metastasis symptom is usually back pain from the cancer spreading into the lumbar vertebrae. A lot of people will just write that off as regular back pain and not go to the doctor for it.


  • My mistake; I recalled incorrectly. It got 83% wrong: https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/

    The chat interface is stupid in so many ways, and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans and would be impossible to teach to an AI. If you are familiar with people and know how to work with them, you can pick up on things like intonation and body language that indicate the patient didn’t actually understand the question and you need to rephrase it, or that there’s something they’re uncomfortable saying or asking, or that they might be lying about things like sexual activity or substance use. And that’s not even getting into the fact that AIs can’t do a physical exam, which may reveal things the interview did not. This also ignores patients who can’t tell you what’s wrong because they are babies, have an altered mental status, or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more when you start talking about treatments that aren’t pills.

    Also, the exams are only one part of the evaluation you have to get through in medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated to ensure that you are actually competent as a physician before you’re allowed to see patients without a supervising attending. For example, there was a student at my school who had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor were just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art portion is through supervised practice until the trainee is able to operate independently.


  • In order to tell it what is important, you would have to read the material to begin with. Also, the tests we took in class were in preparation for the board exams, which can ask you about literally anything in medicine that you are expected to know. The sheer amount of information involved and the number of important details in the text basically necessitate reading the text yourself and knowing how the information in it relates to everything else you’ve read and learned.

    Trying to get the LLM to spit out an actually useful summary would be more time-consuming than just doing the reading to begin with.


  • This attitude is why people complain about doctors having God complexes and why doctors frequently fall victim to pseudoscientific claims. You think you know far more about how the world works than you actually do, and it’s my contention that that is a result of the way med students are taught in med school.

    I’m not claiming to know all of these things. I’m not pretending that I do, but there is still an expectation that I know what kinds of health problems my patients are at risk for based on their lifestyle. I’m better off in this area than a lot of my classmates because I didn’t go straight from kindergarten through medical school. My undergraduate degree is in history and I worked in tech for a while before going back to school. My hobbies are all over the place, including blacksmithing with my Dad when I was a kid. I have significantly more life experience than most of my classmates, so I have a leg up on being familiar with these things.

    I know that there is a lot I don’t know, which is why my approach to medicine is that I will be studying and learning until the day I retire. I have a pretty good idea of where my limits are and when to call a specialist about things I’m not sure of. I make a point to learn as much as I can from everyone: patients, other physicians, my friends, random folks on the street/internet…everyone.

    For example, I know from watching a dumb YouTube channel that someone who worked as an armorer in the Army would have been exposed to some weird chemicals that can have serious health effects, but that wasn’t something explicitly covered in my formal medical school education. I have friends in the Navy, and they’re the ones who told me about the weird fertility effects of working on the flight deck of an aircraft carrier. The Naval medical academy did a study on it, but I would never have had the inclination to go read that study if I hadn’t heard about it from my friends. The list goes on. There are so many things that are important for me to know that will never be covered in our lectures in school and wouldn’t even come up as things to learn about if I didn’t learn about them from other people.


  • Medical malpractice is very rarely due to gaps in knowledge; it is much more likely due to accidents, miscommunication, or negligence. The board exams are not taken at the school and have very stringent anti-cheating measures. They are administered at testing centers with palm-vein scanners, identity verification, and constant video surveillance throughout the test. Any irregularity during your exam gets flagged, and if you are found to have cheated, you are banned from ever taking the exam again (which also prevents you from becoming a physician).



  • I am expected to know and understand all of the risk factors that someone may encounter in their engineering or manufacturing or cooking or whatever line of work, and to know how people’s social lives, recreational activities, dietary habits, substance use, and hobbies can affect their health. In order to practice medicine effectively, I need to know almost everything about how humans work and what they get up to in the world outside the exam room.



  • The AI passed the multiple-choice board exam, but the specialty board exam you are required to pass to practice independently includes oral boards. When given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren’t even in the correct organ system.

    AI pattern recognition works for things like reading mammograms to detect breast cancer, but an AI doesn’t know how to interview a patient to get the history in the first place. AI (or, more accurately, an LLM) can’t do the critical thinking it takes to know which questions to ask, and therefore which labs and imaging studies to order that it would then be able to make sense of. Unless you want a world where every patient gets the literal million-dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.


  • I disagree. I am a medical student, and there is a lot of critical thinking that goes into it. Humans don’t have error codes, and a lot of symptoms are common across many different diagnoses. The critical thinking comes in when you talk to the patient to get a history and a full list of symptoms and complaints, then decide what to look for on physical exam, and then what labs to order to parse out what the problem is.

    You can have a patient tell you that they have a stomachache when what is actually going on is a heart attack. Or they come in complaining of one thing in particular, but that other little annoying thing they didn’t think was worth mentioning is actually the key to figuring out the diagnosis.

    And then there’s treatment…Nurse Practitioners are “educated” on a purely algorithmic approach to medicine, which means that if a patient has comorbidities or contraindications to a certain treatment that aren’t covered on the flow chart, the NP has no goddamn clue what to do. A clear example is selecting antibiotics for an infection: a genuinely complex process that involves memorization, critical thinking, and the ability to research things yourself.
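
    To make the failure mode concrete, here’s a toy sketch in Python of what a purely flow-chart approach amounts to. The infections, drug names, and branching rules are all made up for illustration, not real clinical guidance; the point is only the structure:

    ```python
    # Toy illustration only -- NOT real clinical guidance. "drug A/B/C"
    # are placeholders standing in for whatever the chart prescribes.
    def flowchart_antibiotic(infection: str, allergies: set[str]) -> str:
        """A flow chart is just a fixed chain of branches, nothing more."""
        if infection == "uncomplicated UTI":
            return "drug A"  # the chart never asks about kidney function
        if infection == "strep throat":
            return "drug C" if "drug B" in allergies else "drug B"
        # Comorbidities, drug interactions, pregnancy, renal failure --
        # anything the chart's authors didn't anticipate ends up here.
        raise ValueError("situation not on the flow chart")

    # A patient with renal failure and a UTI gets the same answer as a
    # healthy one, because the chart has no branch for that comorbidity.
    print(flowchart_antibiotic("uncomplicated UTI", allergies=set()))
    ```

    The chart only ever checks the inputs its authors thought of; everything else either gets the default answer or no answer at all.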


  • Medical school has to hold a higher standard, and any amount of cheating will get you expelled from most medical schools. Some of my classmates tried to use ChatGPT to summarize things to study faster, and it just meant they got things wrong because they firmly believed the hallucinations and bullshit. There’s a reason you have to take the MCAT to be eligible to apply to medical school, pass 2 board exams to graduate from medical school, and take a 3rd board exam after your first year of residency. And there are also board exams at the end of residency for your specialty.

    The exams will weed out the cheaters eventually, and usually before they get to the point of seeing patients unsupervised, but if they cheat in classes graded on a curve, they’re stealing a seat from someone who might have earned it fairly. In the weed-out class example you gave, if 100 students are competing and only the top half advance, 3 cheaters in that top half mean the students ranked 51, 52, and 53 on merit are wrongly denied the chance to progress.
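
    A quick back-of-the-envelope check of that displacement (assuming, purely as an illustration, a 100-student class where the top 50 advance):

    ```python
    # Illustrative numbers: 100 students ranked by merit, top 50 advance.
    seats = 50
    cheaters_in_top_half = 3

    # Each cheater who inflates their way into the top half pushes one
    # honest student below the cutoff.
    displaced = list(range(seats + 1, seats + 1 + cheaters_in_top_half))
    print(displaced)  # [51, 52, 53] -- earned a seat on merit, denied anyway
    ```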




  • A bunch of the “citations” ChatGPT uses are outright hallucinations. Unless you independently verify every word of the output, it cannot be trusted for anything even remotely important. I’m a medical student and some of my classmates use ChatGPT to summarize things and it spits out confabulations that are objectively and provably wrong.





  • I have the suspicion that you aren’t really familiar with what rural poverty looks like. These are people who cannot qualify for credit cards, get taken advantage of by payday loans, and struggle to afford basic necessities that still don’t add up to a reasonable standard of living. These are people who can’t afford to put enough gas in the tank to drive to those other stores; their only source of groceries is likely to be a Walmart, if they’re lucky enough to have one in their town.

    There is a good amount of Schadenfreude to be had when it comes to Trump voters, but when you’re in the position of trying to help them control their diabetes and high blood pressure on a diet of cheap, processed, high-sugar, high-sodium crap, you’ll lose that spiteful glee real quick. These are people inextricably trapped by poverty, food deserts, healthcare deserts, and failing education systems; they never really had a chance, and it’s hard for me to find any real satisfaction in seeing them suffer.