Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about the app, but I managed my emotions, took a breath and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4–5 years old, so that could also be the cause.)

Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.

I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to let me be a patient because I don’t consent to AI, what should I do? What would you do? Walk away, even though this is a major line in the sand for me, or consent in order to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?

EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.

Edit 2: I just wanted to say that I appreciate everyone here that commented. For the most part everyone brought up valid points, and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.

Thank you everyone!

  • Luc@lemmy.world · +2 · 4 days ago

    I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.

    A cynical part of me thinks they’ll just have it “locally installed” in the same way that Firefox is locally installed (which doesn’t mean the meaningful part runs locally), and that no third party has “access” because the servers just don’t show data from other tenants, even though the server operator could theoretically see it all. It’s not like the medical people necessarily know better if their vendor answered the concerns in this manner.

    One way for lay people to find out might be to turn off WiFi, or disconnect the network cable, and see if it still works, in case you’re in a position where the doc might seem willing to do such a 30-second experiment (if they haven’t already tried it themselves). That doesn’t mean it doesn’t get uploaded when the internet is reconnected (e.g. for backups), but that is much harder to check, and if the vendor already made sure the processing is all local, then it’s probably okay and not being sold off as training or insurance data.

    Kudos for reading the terms of service and raising your concerns with them! So long as some of us keep doing that, the privacy of people who don’t know about this sort of thing is also better-protected. Thank you :)

  • GreenBeanMachine@lemmy.world · +6 · 7 days ago
    1. If your options are having a doctor that uses AI or having no doctor at all, some doctor is better than none.

    2. I would ask for more information about what AI they are using, where the data is processed (locally or online), where and how the AI-collected data is stored (locally or in the cloud), who can access your data, and whether it could be used for any AI training.

  • Crankley@lemmy.world · +4 · 7 days ago

    I have all sorts of anxiety surrounding AI. Most of the anxiety comes from the misuse, the copyright issues and the departure from critical and creative thinking. However, one of the fields where I actually think it could be very useful and of great benefit is medicine.

    That being said, I’d be a no as well. The way this is worded, and the track record we’ve seen with privacy, doesn’t fill me with much confidence. Feels like another instance of offloading thinking rather than a tool for better diagnosis.

    The process sounds very American. The confluence of the commercialization of healthcare and tools that can make it look like time and attention have been spent leads to some bad places. I’d be very sceptical about any advice, medical or otherwise, I received.

    The unfortunate truth is that the cost of care will be higher for health companies not using these tools, which means bespoke human-led care will be a luxury in America in the near future. I don’t think it’s a reality you are going to be able to avoid.

    I would push back at every opportunity, double check all of the information you are getting, ask pointed “why this” questions, make doctors clearly communicate that they are the ones giving the recommendation. At the end of the day a good doctor with AI tools is likely to do a better job.

  • NotMyOldRedditName@lemmy.world · +4 · 7 days ago

    For note-taking only, I’d be fine IF it was all run locally with no ability for it to be trained on.

    I’d want assurances from the doctor that they also carefully review the notes immediately after, or that I get to see the notes before leaving, due to the risk of hallucinations that could cause future care problems.

    They could have it visible on a screen while you’re in the room, to help you be sure it’s accurate.

    Edit: I’d care less about it being local if it weren’t medical/legal in nature.

  • Tollana1234567@lemmy.today · +3 · 7 days ago

    I wonder if they hallucinate notes post-appointment. I’ve noticed that there have been complaints against certain providers that the “doctors” did other examinations that they didn’t do in person, and these appeared on their records.

  • Yes I would, but only if I could be sure the LLM wasn’t listening. Collection of personal information requires consent in Canada, and I wouldn’t be giving consent.

    I don’t believe for one second the conversations aren’t uploaded to a datacenter nor that those transcripts will be deleted.

  • sudoer777@lemmy.ml · +2 · 7 days ago

    Imagine charging $300 then outsourcing your note taking to a machine that barely knows shit and has nothing to lose

  • slazer2au@lemmy.world · +92/−5 · 9 days ago

    I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.

    • oneser@lemmy.zip · +14/−18 · 9 days ago

      If this was for a GP, I would agree with this stance. But a good, fitting and competent mental health professional can be harder to find.

      • applebusch@lemmy.blahaj.zone · +10 · 8 days ago

        That’s the last fucking profession that should be using LLMs… People can gaslight themselves with chatbots without paying for a trusted professional to reinforce that bullshit.

        • oneser@lemmy.zip · +2/−1 · 8 days ago (edited)

          OP didn’t state this clearly, but I went and looked. The app is not for replacing consults, only billing etc., so I’d put it in the “annoying, but not world-ending” category.

      • Zos_Kia@jlai.lu · +4/−2 · 8 days ago

        By god, they’re going to make OP change doctors just because they hate “le stochastic parrot”. And OP is probably in the US, which makes the whole thing even crueller.

        Literally a horde of teenagers playing with a bipolar person’s head because they have big feelings about stuff.

        And all this for a fucking note-taking app, Jesus Christ. Yeah, sure, OP is probably risking their mental health in the process, but who gives a shit about that when you have an occasion to proclaim that “le AI bad”.

        • WhyJiffie@sh.itjust.works · +3 · 8 days ago

          You seem to have no clue about the problem at hand. The AI transcriber hallucinating is the lesser of the issues. The worse problem, which is irreversible, is that the treatment session and every private detail that gets discussed is funneled to at best questionable companies who will do whatever they want with your private information. Once that has happened, you can’t just make them delete what they stored in the process; it is completely unverifiable what they do besides offering the original service. Everything that was told in the session will not stay between the two of you.
          Accepting this unknowingly is very dangerous. Accepting it knowingly will alter what you say, and the results with it, like going to a therapist who you know personally, which is not allowed for very good reasons.

          • Zos_Kia@jlai.lu · +3/−2 · 8 days ago

            You think therapists and doctors in general don’t use Docs or Notes services that are hosted or backed up in the cloud? You think having your medical data leaked to tech companies is new? Just because the notes-transcription app is AI doesn’t make it magically worse. In fact, it makes the data harder to access, as you need to re-infer the whole enchilada if you want to mine it (as opposed to, say, Google Drive, which can just make a SQL query on your data and get it structured and ready to use).

            It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics. It’s really cool for you that you’re in this position of privilege. It’s not cool to be pushing on someone with a clinical condition in a way that will probably get them worse off, in a country with absolutely no mental health safety net. Just like antivax it’s coated in fake concern, but you’re playing a dangerous game with someone else’s life and you’re cool with it because you’re insulated from the consequences.

            You guys really are a pure product of those amoral hyper-individualistic times.

            • WhyJiffie@sh.itjust.works · +2 · 7 days ago

              It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics.

              Oh, now I’m a privacy purist! Oh god, what have I become! I want totally unreasonable things!!

              Or, it seems you by default don’t care about privacy at all, because surely who needs it, and you’ve also already forgotten the case of women in the USA whose online period-tracker apps outed them for having an illegal abortion.

              Just like antivax it’s coated in fake concern,

              Fake concern, sure… My concerns are very real, and OP has come for advice, asking among other things what the consequences could be. Well, this is one of the consequences there will be.

              You guys really are a pure product of those amoral hyper-individualistic times.

              Yes, blame me, not the system that made this situation. Don’t you want to call the cops on me?

        • Washedupcynic@lemmy.ca (OP) · +1 · 7 days ago

          I am concerned about what is done with the data generated via the saved recording and transcription. Yes, I live in the USA. Our government is currently kidnapping people off the street and disappearing them for being brown. They are attempting to build databases identifying trans people. So yeah, I’m concerned that the third party my doctor is using, MYIO, could sell the data/transcripts, and before I know it I end up on a government list and get disappeared because I am gay. Could the theft of the data generated by the app lead to identity theft? MYIO says the videos aren’t stored long-term and everything is encrypted, but companies lie, and the monetary penalties are just rolled into the cost of doing business. This isn’t a note-taking app; there are already plenty of transcribers on the market. This is something entirely different.

          I’ve already had my identity stolen and credit cards opened in my name.

          And no one is going to “MAKE” me change doctors. That’s something I decide for myself.

      • phoenixarise@lemmy.world · +2/−4 · 8 days ago (edited)

        I don’t believe that. They just don’t want to pay them what they’re worth. Machines don’t ask for days off or health insurance, that’s their rationale. I hope they go out of business.

    • originalucifer@moist.catsweat.com · +8/−51 · 9 days ago

      you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?

      the ‘does this thing have ai in it’ question is already a fucking blur as businesses link to each other via private and public APIs… healthcare is no different.

      these things are already in place in many settings. if you’re part of any nationwide health service, you’re already impacted.

      it’s like the fact that a huge % of our GDP is tied to like 10 companies… you cannot live your life in the modern united states without suffering products or services from those 10 companies, full stop. your life with ai will look the same.

      can you work hard, avoid shit and cry about it? yep. yep you can… but that’s about it.

      • OwOarchist@pawb.social · +43/−2 · 9 days ago

        you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?

        The truth doesn’t care whether it’s “fresh” or not.

        As long as AI still hallucinates, it will be useful for entertainment purposes only and never for anything as serious as healthcare.

        your life with ai will look the same.

        lol, tell that to every other business fad that has come and gone.

        The AI bubble will pop, the economy will crash, and in the long run, that will be a good thing.

        • THE_GR8_MIKE@lemmy.world · +22/−1 · 9 days ago

          Dude must be some MBA crypto bro AI slop jock. His grammar isn’t good enough to be one of those idiot CEOs who just learned what artificial intelligence is. Maybe he’s a shareholder for one of those soul-less companies. Probably not that either though. Perhaps he’s just a terrible artist or programmer who uses AI slop for all of his works of shart. The possibilities really are endless these days.

          • originalucifer@moist.catsweat.com · +6/−10 · 9 days ago

            im an ex corp drone whose value was replacing humans with automation.

            it sucks, it already exists, it will happen more. llms are already in these pipelines and there’s nothing any of us can do to avoid it.

            im not saying it’s good. im not saying it should be. im saying it exists right now cuz i’ve been a part of it.

                • phoenixarise@lemmy.world · +5/−1 · 8 days ago

                  Oh okay, so your only value is the pursuit of material bullshit and not the well-being of human beings. Good luck getting AI to pay for your shitty wares when nobody makes money to afford them. 🤭

                  I have no idea what it’s like to be you, and I’m glad I don’t. Enjoy your cold empty heart! 🙂

      • Janx@piefed.social · +14/−1 · 9 days ago

        It’s almost like the very businesses that creamed their pants about being able to replace workers and reap endless “blue ocean” profits exaggerated, lied, and forced AI into every. single. product. That’s not consumers’ fault…

        • originalucifer@moist.catsweat.com · +3/−6 · 9 days ago

          i can’t understand why people are oblivious to the multi-front war that is AI.

          there’s the shit you hear about and see every day (oh look, copilot shit the bed! claude can’t add! teehee, look at all the extra fingers!) and then there’s the shit that is actually being implemented in process models all over the place, in nearly every department. from inventory to healthcare analysis to customer service, this shit is in daily use now… and you cannot avoid it.

          ai is just an api call away, and software companies suck.

      • kescusay@lemmy.world · +9 · 8 days ago

        Ummm, hallucinations are literally how LLMs work. Everything they generate is confabulation, though sometimes it’s useful confabulation.

        • timbuck2themoon@sh.itjust.works · +2 · 8 days ago

          I think we should stop using their terms.

          LLMs spout BULLSHIT half the time. They don’t hallucinate. They confidently state incorrect garbage.

      • slazer2au@lemmy.world · +4 · 8 days ago

        you cannot live your life in the modern united states without suffering products or services from those 10 companies

        Well, it’s good that I don’t live there.

  • soar160@lemmy.world · +65/−1 · 9 days ago

    Definitely ask how they are using it. I know a number of physicians who are just using it as dictation software to quickly make a first draft of their paperwork; it helps lighten a big load.

    • credo@lemmy.world · +26/−1 · 8 days ago

      This is the answer.

      Most docs can’t keep up with the mountain of paperwork or billing codes required by insurance companies these days. The software helps, but it requires the doc to review and sign off on the notes.

      It’s not an LLM coming up with treatment plans, etc. It’s transcription+.

    • ace_garp@lemmy.world · +11/−1 · 8 days ago (edited)

      Dictation and summary software could be installed onto the doctor’s computer.

      There is something else going on here, with pushing an app onto patients.

    • WesternInfidels@feddit.online · +7 · 8 days ago

      I had a visit with a PA who pantomimed the use of an inhaler she didn’t actually have on hand. The note-taking robot decided that was a “demonstration” with a billing code, and that it should be billed as $800.

  • TwilitSky@lemmy.world · +46/−1 · 9 days ago (edited)

    OP, I’m a bit of an unfortunate expert in U.S. healthcare.

    The fact that you have a psychiatrist you trust, who has you on the right meds, and whom you’ve been with for 3 years is invaluable. You calling yourself stable is a huge thing. You wouldn’t be saying that if you weren’t on solid ground.

    It would be completely crazy to give up a psychiatrist who is on your insurance over some AI garbage that is just transcribing notes for your doctor.

    At a bare minimum, get a new psychiatrist who is on your insurance before switching. That will take about 6 months if you’re very lucky.

    Play it through: do you want to lose a quality prescriber and talk therapist? Also, maybe you should just tell them you’re extremely concerned and see what they say or do.

    The end result can’t be worse than you giving up on your mental health. You already know how hard it is to find quality psych care.

    • Washedupcynic@lemmy.ca (OP) · +21 · 8 days ago

      I 100% agree with you. I trust my doctor. I don’t trust the app. Prior to this we were using zoom.

      • ace_garp@lemmy.world · +9/−1 · 8 days ago

        A video-conferencing call is generally one-to-one with the clinician you know and have a relationship with.

        An AI app on your phone opens your data to being viewed and scrutinised by a third party, within the medical practice or outside it. (Which may be a positive, adding other insights that a single person may miss.) Unless this is agreed to, it would be a breach of patient trust. It seems the agreement you click gives your permission to share your data anywhere that ‘furthers treatment’.

        It seems like massive overreach to install it on your phone instead of on the doctor’s computer (where it could still summarise all interactions).

        I would say you are right to want to move away from this kind of imposition. If you do change doctors, make sure to indicate that you will not install any apps as part of your treatment.

        At the very least, I would install the app under a separate user from my main account.

      • CultLeader4Hire@lemmy.world · +6 · 8 days ago

        As a person who has strong bipolar tendencies, though not over the threshold for a diagnosis, even I struggle with these sorts of things, and I often find myself asking, “Is this thought not just self-sabotage at the end of the day?” I’m also physically disabled and go to a lot of doctors’ appointments. They now use AI for notes, and I don’t like it either, but to allow that to stand in the way of my care would absolutely be self-sabotage. If my doctors started outsourcing other aspects of their jobs to AI, I would seriously have a problem and would reconsider my position. But note-taking is incredibly time-consuming for doctors, and if using software that transcribes our conversations allows them to be better at their actual job of being a doctor, that’s a compromise I can make, especially when I remind myself that bipolar symptoms often get in the way of a person’s willingness to compromise.

        • TwilitSky@lemmy.world · +6 · 8 days ago

          That’s the biggest concern.

          People who need life-saving mental healthcare are already on a brave journey by admitting they need help, and it’d be a shame if AI crap got in the way of that.

          People just don’t think.

      • WhyJiffie@sh.itjust.works · +2 · 8 days ago

        Honest question: was it no problem that Zoom was being used for the sessions? I’m asking because, from the post, you seem to care about your privacy.

        • Washedupcynic@lemmy.ca (OP) · +1 · 8 days ago

          The Zoom sessions weren’t being recorded or analyzed by AI to create a transcript. I met with my doctor via Zoom, and the doctor took notes.

          • WhyJiffie@sh.itjust.works · +2 · 7 days ago

            I understand that. My point is that Zoom has access to the video and audio feed in transit. Despite being very popular, they lied about their systems without pause when they became big during COVID, including claiming that their system was end-to-end encrypted, which it was not.

            There are better alternatives, but unfortunately only a small fraction of people know about them.

            To be clear, I support you if you are looking to preserve your privacy with this AI transcription; I just wanted to let you know that information was already leaking, even if it was baselessly believed that it was not.

    • WhyJiffie@sh.itjust.works · +3 · 8 days ago

      What the hell! The question is nothing crazy; mindlessly accepting it is what is crazy! It is a hard situation to be in, for sure, but it seems you have zero idea about the consequences if you think it’s just “some AI garbage that is just transcribing notes”.

  • stringere@sh.itjust.works · +25 · 8 days ago

    No. Absolutely not. I cannot trust any current AI model with HIPAA compliance.

    Find another doctor. I just had to fire my therapist, because when I went in for this week’s appointment they were playing some Jesus worship service and song. I told her that it was our last session because I no longer had trust in their office, and added that I had no faith any progress would ever be made after I was triggered waiting to see my therapist. It could have been the receptionist’s choice of music or someone else’s from their office, but since they do not advertise as a faith-based therapy group, they should have left that shit at home or should expect more of the same from people like me.

    • BanMe@lemmy.world · +8 · 8 days ago

      It’s worth researching a therapist’s credentials; some states allow “pastoral counseling degrees” and the like to be a path to “mental health therapist.” You want an LISW, a licensed social worker. I’m not saying there aren’t weirdos, or that your experience wouldn’t happen with a social worker… just that many folks don’t realize some therapists went to theology classes instead of psychology classes, which is a prime setup for problems.

      • Tollana1234567@lemmy.today · +4 · 8 days ago

        Probably better to look for a licensed psychologist/psychiatrist, or someone with a PsyD. You don’t really want to risk it with someone who isn’t in the field.

      • stringere@sh.itjust.works · +3 · 8 days ago

        I didn’t know about the theology-to-therapist route. My therapist herself never indicated her faith leanings, so credit due to her there. She has a Master’s and is an LPC. As I mentioned before, it’s entirely possible she had nothing to do with, nor endorses, the music choice in the building, but tacit endorsement by not stopping it from happening is enough for me to leave.

        Maybe, just maybe, let’s not play music from the loudest hate group in the USA in the lobby of a therapist’s office.

    • Washedupcynic@lemmy.ca (OP) · +29/−2 · 9 days ago

      I can, but in truth I don’t care. I don’t want my data being used to train AI, and I don’t want my treatment to be guided by AI.

      • scrollo@lemmy.world · +16/−5 · 9 days ago

        The “fine print” you added doesn’t say the automated transcript will be used for training a model. I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM. If they did that, the clinic would go bankrupt from lawsuits.

        Also, if they only use AI for automated transcription, would you feel the same if, instead of “AI”, it were a dedicated automated transcription tool?

        If you abhor all things AI, your feelings about not continuing with this clinic are valid. However, I don’t think they are using AI in the ways you think they are.

        • [deleted]@piefed.world · +16/−2 · 8 days ago

          I’d highly, highly doubt HIPAA protected clinic notes would be use for training an LLM

          7areDwj5zOwv64i.jpg

        • WhyJiffie@sh.itjust.works · +2 · 8 days ago

          If they did, the clinic would go bankrupt from lawsuits.

          For that, patients would need to be able to prove that their data was used. How would you be able to prove it?

          • Washedupcynic@lemmy.ca (OP) · +1 · 6 days ago

            Being disappeared for being mentally ill, trans, or gay, which conservatives would love to rebrand as mental illness. And that’s assuming you had a lawyer on retainer before you were disappeared, and family willing to fight for you while you languish in a concentration camp.

      • Scratch@sh.itjust.works · +7 · 9 days ago

        So ask about those two specific points.

        And in the session you can (probably) go over the generated notes with your doctor to double check.

        The term AI is very broad and generic; today it’s used to refer to LLMs and fancy denoisers, but AI has been around for decades in some form or another. My point is, speech transcription has been around longer than the current LLM fad, so it might not be an LLM doing your transcription. Would that allay some of your concerns?

        • BlindFrog@lemmy.world · +3 · 8 days ago

          If it were locally run transcription software, would a healthcare provider still be required to ask your permission to use it?

          • WhyJiffie@sh.itjust.works · 8 days ago

            I very much hope so, because in neither case can they guarantee that the data won’t be transferred elsewhere.

      • KombatWombat@lemmy.world · 8 days ago

        It doesn’t sound like AI is being used for either. It’s just summarizing the encounter at the end as a note, and not storing any data to train on.

    • oneser@lemmy.zip · 9 days ago

      And to piggyback on this question: what alternatives do you have, and are they actually viable?

      • Washedupcynic@lemmy.ca (OP) · 9 days ago

        The alternative is finding a different provider. I already have a long list of offices to call. Getting a list together was the first thing I did when they notified me about rolling out this app.

  • leadore@lemmy.world · 8 days ago

    I feel very strongly about this and I would change doctors. But of course it won’t be long before they all do this and we’ll have no alternative. The two biggest problems I see are:

    1. I saw a news story where a doctor who uses this said it saves her time because before seeing the patient she gets an AI summary of their chart, so she doesn’t have to “go through several tabs” to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what’s in a person’s chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there’s no reason a traditional report or summary screen couldn’t be programmed to pull data out of the most important fields and arrange them in the desired format.

    2. THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI, which generates a visit summary and puts it into your medical records. So god only knows what third-party private corporate vulture has access to your doctor/patient conversations and what they’ll do with them, and again, what hallucinated shit will get put into your medical records!

    So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?
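    To illustrate point 1: a traditional summary screen is just a fixed template over named chart fields. Here is a minimal sketch (all field names hypothetical, not from any real EHR schema) with no statistical text generation anywhere, so nothing can be hallucinated:

```python
# Toy sketch of the "traditional report" idea: a deterministic summary
# built by pulling named fields out of a structured chart record.
# Field names are hypothetical, not from any real EHR schema.

def summarize_chart(record: dict) -> str:
    """Render the most important fields in a fixed, predictable format."""
    meds = "; ".join(record["medications"]) or "none"
    allergies = ", ".join(record["allergies"]) or "NKDA"
    return (f'{record["name"]}, {record["age"]}. '
            f'Active meds: {meds}. Allergies: {allergies}.')

patient = {
    "name": "Doe, J.",
    "age": 54,
    "medications": ["lamotrigine 200 mg", "lisinopril 10 mg"],
    "allergies": [],
}

print(summarize_chart(patient))
```

    Same input, same output, every time; the worst failure mode is a missing field, not an invented diagnosis.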

    • sem@piefed.blahaj.zone · 8 days ago

      If you buy into the story that “someday they’ll all be using it” you are doing the AI boosters’ job for them. It is not a foregone conclusion, and there is no reason to accept that future.

      • leadore@lemmy.world · 8 days ago

        I hope you’re right! The magical thinking and child-like trust in this tech by otherwise intelligent people is scary, though.

    • Cellari@lemmy.world · 8 days ago

      AI is really good at concepts, not logic. But even then, the performance is going to be dependent on the data it was modelled with.

      You can ask for a specific symptom of pneumonia and it can answer. You can also ask for a summary of pneumonia, as someone has most likely written one already and the AI knows to use it because of the concept relevance. But if you ask it to summarise a patient’s information, it will split that information into blocks it can summarise based on whatever summarisation patterns are in the model data. I can assure you it cannot ever have all the possibilities pretrained already.

      • leadore@lemmy.world · 8 days ago

        My fear is that the models merge all kinds of patient record info together in the statistical model, so the ‘summaries’ will just write the most likely next word in the phrase, and wrong information and incorrect diagnoses will be recorded into a person’s record, or important information will be omitted.

        I predict that people will be harmed or die because of missing or false information in patient records. But it will be difficult for the public to find out about it because of privacy issues and the unwillingness of institutions to acknowledge it.

        Drugs have to go through multiple stages of testing and trials before they’re allowed to be used on patients. But no one is doing any kind of testing on the effects of this at all, let alone controlled trial rollouts with review, before allowing general use.

  • Victoria@lemmy.blahaj.zone · 9 days ago

    AI is an overloaded marketing term. Definitely ask which kind of AI it is, how it is used, and which of your data is going to be used and how.