• Chozo@kbin.social
    ↑220 ↓13 · 11 months ago

    If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.

      • Ganbat@lemmyonline.com
        ↑63 ↓5 · 11 months ago

        Did you read the article? It didn’t. Someone received someone else’s chat history appended to one of their own chats. No prompting, just appeared overnight.

          • Ganbat@lemmyonline.com
            ↑9 · 11 months ago

            Well, yeah, but the point is, ChatGPT didn’t “remember and then leak” anything, the web service exposed people’s chat history.

            • wildginger@lemmy.myserv.one
              ↑2 · 11 months ago

              Well, that depends. Do you mean GPT, the specific chunk of LLM code? Or do you mean GPT, the website and service?

              Because while the nitpicking details matter to the programmers fixing it, how much does that distinction matter to you or me, the laymen using the site?

      • GBU_28@lemm.ee
        ↑7 · 11 months ago

        A huge value-add of ChatGPT is that you can have a running, contextual conversation. That requires memory.

        • GamingChairModel@lemmy.world
          ↑6 · 11 months ago

          All of these LLMs should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to LLM training and how chats are used is a conversation we can have, but the article and the example given describe a much, much simpler problem: a user checking his own chat history was able to see other users’ chats.

        • Farid@startrek.website
          ↑6 ↓1 · edited · 11 months ago

          It doesn’t actually have memory in that sense. It can only remember things that are in the training data or within its limited context (4k-32k tokens, depending on the model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
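(For the curious, a minimal sketch of that retrieval idea, assuming nothing about OpenAI’s actual internals: score each past message against the new one and greedily pack the best matches into a fixed token budget. The word-overlap similarity and one-word-per-token count below are crude stand-ins for real embeddings and tokenizers.)

```python
# Crude stand-in for semantic similarity: word-overlap ratio between texts.
def similarity(query: str, message: str) -> float:
    q, m = set(query.lower().split()), set(message.lower().split())
    return len(q & m) / len(q | m) if q | m else 0.0

# Pick the most relevant past messages that still fit the token budget.
def pack_context(query: str, history: list[str], token_budget: int) -> list[str]:
    ranked = sorted(history, key=lambda msg: similarity(query, msg), reverse=True)
    chosen, used = [], 0
    for msg in ranked:
        cost = len(msg.split())  # naive tokenizer: one word ~ one token
        if used + cost <= token_budget:
            chosen.append(msg)
            used += cost
    return chosen
```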

          • GBU_28@lemm.ee
            ↑6 · edited · 11 months ago

            I’m familiar; it’s just easiest for the layman to think of the model as having “memory”, since historical search looks a lot like it at arm’s length.

  • lowleveldata@programming.dev
    ↑144 ↓10 · 11 months ago

    ChatGPT doesn’t leak passwords. Chat histories are leaking, and one of them happens to contain a plaintext password. What’s up with the current trend of saying AI did this and that when the AI really didn’t?

    • Xyphius@lemmy.ca
      ↑63 ↓1 · 11 months ago

      you can go hunter2 my hunter2-ing hunter2.

      haha, does that look funny to you?

    • DreamButt@lemmy.world
      ↑41 · 11 months ago

      Back in the RuneScape days, people would run dumb password scams. My buddy was introducing me to the game. We were sitting in his parents’ garage while he played and showed me his high-level character. Anyway, he walks around the trading area and someone says something like “omg, you can’t type your password backwards: *****”. In total disbelief, he tries it out. Instantly freaks out, logs out to reset his password, and fails because the password has already been changed.

  • webghost0101@sopuli.xyz
    ↑113 ↓5 · 11 months ago

    So what actually happened seems to be this:

    • A user was exposed to another user’s conversation.

    That’s a big oof and really shouldn’t happen.

    • The conversations that were exposed contained sensitive user information.

    Irresponsible user error; everyone and their mom should know better by now.

    • foggy@lemmy.world
      ↑23 ↓1 · 11 months ago

      Yeah, you gotta treat ChatGPT like it’s a public GitHub repository.

    • ZzyzxRoad@sh.itjust.works
      ↑13 ↓7 · 11 months ago

      Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it’s the users who are idiots?

      Except it’s never just about that. Every comment has to make it known that they would never allow that to happen to them because they’re super smart. It’s honestly one of the most self-righteous, tone deaf takes I see on here.

      • summerof69@lemm.ee
        ↑10 · 11 months ago

        I don’t support calling people idiots, but here’s the thing: we can’t control whether corporations leak our data, but we can control whether we share our passwords with ChatGPT.

      • pearsaltchocolatebar@discuss.online
        ↑8 ↓1 · 11 months ago

        Because that’s what the last several reported “breaches” have been. There have been a lot of accounts compromised through an unrelated breach because the users re-used the same passwords across multiple accounts.

        In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.

      • Ookami38@sh.itjust.works
        ↑5 · 11 months ago

        Data loss or leaks may not be the end user’s fault, but it is their responsibility. Yes, OpenAI should have had shit in place for this to never have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they have in place for our data.

        The only point of access to my information that I can control completely is what I do with it. If someone says “hey, don’t do that with your password” they’re saying it’s a potential safety issue. You’re putting control of your account in the hands of some entity you don’t know. If it’s revealed, well, it’s THEIR fault, but you also goofed and should take responsibility for it.

      • stoly@lemmy.world
        ↑5 ↓1 · 11 months ago

        Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we’re all tired of hearing this story over and over while the public learns nothing.

        • HelloHotel@lemm.ee
          ↑1 ↓1 · edited · 11 months ago

          Your frustration is valid. Also, calling people stupid is an easy mistake that a lot of people make.

          • stoly@lemmy.world
            ↑3 · 11 months ago

            Well I’d never use the term to describe a person–it’s unnecessarily loaded. Ignorant, naive, etc might be better.

            • HelloHotel@lemm.ee
              ↑2 · edited · 11 months ago

              Good to hear. I don’t know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 Oops.

      • webghost0101@sopuli.xyz
        ↑2 · edited · 11 months ago

        To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn’t absolve responsibility.

        I do the same oversharing here on Lemmy. But what I don’t do is share real login information, real names, SSNs, or addresses.

        OpenAI is absolutely still to blame for leaking users’ conversations, but even if they weren’t leaked, that data will be used for training and should never have been put in a prompt.

    • Rand0mA@lemmy.world
      ↑1 ↓1 · edited · 11 months ago

      Maybe it has something to do with being retrained/fine-tuned on the conversations it’s having.

  • Ganbat@lemmyonline.com
    ↑48 ↓3 · edited · 11 months ago

    They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).

    This sounds more like a huge fuckup with the site, not the AI itself.

    Edit: A depressing amount of people commenting here obviously didn’t read the article…

    • 𝚝𝚛𝚔@aussie.zone
      ↑15 ↓1 · 11 months ago

      Edit: A depressing amount of people commenting here obviously didn’t read the article…

      Every time

    • Feathercrown@lemmy.world
      ↑3 ↓1 · 11 months ago

      To be fair the article headline is a straight up lie. OpenAI leaked it by sending a user someone else’s chat history, ChatGPT didn’t leak anything.

      • GamingChairModel@lemmy.world
        ↑5 ↓2 · 11 months ago

        The ChatGPT service leaked the data. Maybe that can be attributed to the OpenAI organization that owns and operates ChatGPT, too, but it’s not “a straight up lie” to say that ChatGPT leaked information, when ChatGPT is the name of both the service and the LLM that powers the interesting part of that service.

      • HelloHotel@lemm.ee
        ↑8 ↓1 · edited · 11 months ago

        Stupid is too harsh. They could be as intelligent as you or me, but they are fed propaganda/marketing, the thing is made to hide its rough edges, and the hype from the propaganda machine puts people in a hazy mindset where it’s hard to think.

        • baseless_discourse@mander.xyz
          ↑10 ↓1 · 11 months ago

          They could be as intelligent as you or me.

          They are certainly pretty stupid if they are as intelligent as me.

        • doctorcrimson@lemmy.world
          ↑3 ↓6 · edited · 11 months ago

          I think the average person is not very smart, especially considering that the USA, Russia, China, and India make up large parts of the world population. Now realize that half of everyone is even dumber than the median. The fact that propaganda and hype are highly effective in the first place is evidence of our lacking capabilities as a species.

      • stoly@lemmy.world
        ↑2 · 11 months ago

        I had a student graduate recently who told me that he thought that technology just worked before joining my team of computer lab managers. I suspect that people think that tech in general JUST GOES.

          • SparrowRanjitScaur@lemmy.world
            ↑5 ↓4 · edited · 11 months ago

            No need for personal attacks. Since you won’t define it I will:

            The ability to acquire and apply knowledge and skills (from Oxford Languages)

            I would argue this applies to ChatGPT. Under the hood, ChatGPT is a neural network, and it is clearly capable of acquiring knowledge during training. And ChatGPT is also clearly capable of applying that knowledge to produce answers to questions or novel solutions to problems.

            Based on this definition, I would argue that ChatGPT is intelligent. Whether ChatGPT is sentient or not is a very different question. I would argue not, but again, that depends on the definition of sentience.

            • doctorcrimson@lemmy.world
              ↑1 ↓4 · 11 months ago

              Hey bud I’ve got a hint for you to take, behold the list of people who wanted to have this conversation with your stupid socially inept ass:

                • doctorcrimson@lemmy.world
                  ↑1 · 11 months ago

                  I feel like I had to go at least that hard since they continued their bullshit even after the first insulting one liner. Clearly they’re too dense to screw off otherwise.

  • HiramFromTheChi@lemmy.world
    ↑40 ↓1 · 11 months ago

    It also literally says to not input sensitive data…

    This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don’t care and are still gonna feed the machine everything regardless, then that’s a human problem.

    • NaoPb@eviltoast.org
      ↑10 · 11 months ago

      Hello can you help me, my password is such and such and I can’t seem to login.

      • MystikIncarnate@lemmy.ca
        ↑11 · 11 months ago

        People literally do this though. I work in IT and people have literally said, out loud, with people around that can hear what we’re saying clearly, this exact thing.

        I’m like… I don’t want your password. I never want your password. I barely know what my password is. I use a password manager.

        IT should never need your password. Your boss and work shouldn’t need it. I can log in as you without it most of the time. I don’t, because I couldn’t give any less of a fuck what the hell you’re doing, but I can if I need to…

        If your IT person knows what they’re doing, most of the time for routine stuff, you shouldn’t really see them working, things just get fixed.

        Gah.

        • Wogi@lemmy.world
          ↑6 · 11 months ago

          Lmao, my IT guy asks for our passwords to certain things on an annual basis and stores them as plain text in a fucking email.

          First time he did it I was like “uhh, aren’t we supposed to not share that?” And he just insisted he needed it. Whatever, if he wants to log in to my Autodesk account he’s free to. Not sure how much damage he could do.

          • MystikIncarnate@lemmy.ca
            ↑3 · 11 months ago

            That’s the problem, right there.

            Companies either don’t allow for IT oversight of accounts or charge more for accounts that can be overseen. Companies don’t want to pay the extra, if that’s even an option on the platform, so some passwords end up being fairly common knowledge among the IT staff.

            As for your computer login? No thanks. Microsoft has been built pretty much from the ground up to be administratable. I can get into your files, check what you’re running, extract data, modify your settings, adjust just about anything I want if I know what I’m doing. All without you realizing that I’ve done anything.

            Companies like Autodesk really don’t have that kind of oversight available for accounts that they’re willing to provide to an administrator that’s managing your access. I should be able to list the license that you’ve been given, download whatever software that license is associated to, and purchase/apply new licensing, all from a central control panel for the company under my own administrative user account for their site, whether I’m assigned any software/licensing or not. They don’t. It makes my job very complicated when that’s the case.

            In the event you brick your computer (or lose it, or destroy it, or something… Whether intentional or not), I sometimes need your password to go download your software and install it, then apply your license to it, so that it’s ready to go when you get your system back. You might lose any customizations, but you’ll at least have the tools to do the job.

            On the flip side, an example of good access is with Microsoft 365. You’re having a problem finding an email: I can trace the message in the control panel, get its unique ID, grant myself full access to your mailbox, then switch mailboxes to yours while I’m still signed in as myself, find the message you accidentally moved into the drafts folder, and move it back to your inbox. Then I remove my access, and the message just appears in your inbox without you doing anything. I didn’t need to talk to you, I didn’t need your password... nothing. No interaction, just fixed.

            There’s hundreds of examples of both good and bad administrative access, and it varies dramatically depending on the software vendor. In a perfect world I would have tools like what I get from exchange online for all the software and tools you use. Fact is, most companies are just too lazy to do it, instead of paying the developers to do things well, they’d rather give the money to their shareholders and let us IT folks suffer. They don’t give a shit about us.

  • AlternatePersonMan@lemmy.world
    ↑40 ↓8 · 11 months ago

    And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).

    The implications are terrifying. Nudes, private conversations, passwords, identifying information like your home address, etc. There are a lot of scary scenarios. I also predict that Bard becomes a closet racist real fast.

    We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured, software without a second thought.

    AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress aged ~60 years old on average, I’m not counting on them to regulate or even understand any of this.

    • webghost0101@sopuli.xyz
      ↑6 ↓1 · 11 months ago

      Fuck Google, but I do consider it incompetence on OpenAI’s part that conversations got exposed. That stuff really shouldn’t be possible with properly built software.

      If any personal information gets exposed by Google AI, it’s gonna be to their own analytics and their third-party partners. No one else.

    • cm0002@lemmy.world
      ↑40 ↓5 · 11 months ago

      You could just watch what you input into it lol. ChatGPT is a pretty good tool to have in the toolkit, and like any tool there are warnings and cautions on its use.
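(In the spirit of “watch what you input”, a rough pre-filter one could run over text before pasting it into a chatbot. The regex patterns are illustrative assumptions, not a complete secret scanner.)

```python
import re

# Illustrative patterns only; real secret scanners are far more thorough.
PATTERNS = [
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),        # US SSN shape
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[REDACTED-API-KEY]"),  # OpenAI-style key shape
]

# Scrub likely secrets from text before it ever reaches a chatbot.
def scrub(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```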

      • tsonfeir@lemm.ee
        ↑14 ↓16 · 11 months ago

        It’s an amazing tool. I think it’s funny how many people fight it tooth and nail. I like to think they’re the kind of person who refused to use spell check, or the touch tone phone.

        • gravitas_deficiency@sh.itjust.works
          ↑40 ↓6 · 11 months ago

          There are very valid philosophical and ethical reasons not to use it. We’re not just being luddites for the hell of it. In many cases, we’re engineers and scientists with interest, experience, or expertise in neural nets and LLMs ourselves, and we don’t like how fast and loose (in a lot of really, really important ways) all these big companies are playing it with the training datasets, nor how they’re actively disregarding any sort of legal or ethical responsibility around the technology writ large.

          • tsonfeir@lemm.ee
            ↑7 ↓12 · 11 months ago

            Likewise. The same could be said about every technology.

            • Feathercrown@lemmy.world
              ↑3 ↓1 · 11 months ago

              Uh, no. Why would that be the case? Every technology has unique upsides and downsides and the downsides of this one are not being handled correctly and are in fact being exacerbated.

        • Ilovethebomb@lemm.ee
          ↑8 ↓2 · 11 months ago

          I’m not against ChatGPT or other AI, but I am thoroughly sick of hearing about it.

      • JustUseMint@lemmy.world
        ↑7 ↓1 · 11 months ago

        Absolutely. Host your own. Like the other person said, Hugging Face; and look into llama.cpp as well. Vicuna, Wizard uncensored (probably spelled that wrong).

      • BananaOnionJuice@lemmy.dbzer0.com
        ↑5 · 11 months ago

        I finally found some offline ones, jan.ai and koboldcpp. You download the GGUF model and run everything from your own PC; it just takes a lot of CPU and GPU for it to work acceptably. My setup can’t really manage much more than a 7B model.
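(A back-of-envelope sketch of why 7B is about the ceiling on a modest machine: a quantized GGUF model needs roughly parameters × bits-per-weight ÷ 8 bytes just for the weights, plus KV cache and runtime overhead. The 4-bit figure below is an illustrative, common quantization setting.)

```python
# Approximate weight-only memory footprint of a quantized model, in GiB.
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    size_bytes = params_billion * 1e9 * bits_per_weight / 8
    return size_bytes / 2**30

# A 7B model at 4-bit quantization comes out around 3.3 GiB of weights,
# before the KV cache and runtime overhead are added on top.
```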

    • zeluko@kbin.social
      ↑5 ↓1 · 11 months ago

    To be fair, they are talking about the OpenAI end-user version, not the models themselves.
    It’s still sketchy to send your data willingly to them and hope that, because you pay per request, it’s not getting tracked and saved.
    My company is deep into Microsoft, so we all get Bing Chat Enterprise.
    Microsoft says it doesn’t store anything and runs on separate systems... I guess with a company offering they are more likely to put more protections in place, because a breach would mean real consequences.
    (As opposed to a breach affecting end users, most of whom don’t care or would never go through the legal trouble.)

  • realharo@lemm.ee
    ↑23 ↓1 · 11 months ago

    Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that’s what they say.

  • NedRyerson@lemmy.ml
    ↑15 ↓1 · 11 months ago

    Who knew everyone had the same password as me? I always thought I was the only ‘hunter2’ out there!

      • Gordito@lemmy.world
        ↑2 ↓1 · 11 months ago

        Me too! I see

        “Who knew everyone had the same password as me? I always thought I was the only ‘*******’ out there!”

        Lemmy rocks!

    • HelloHotel@lemm.ee
      ↑7 · edited · 9 months ago

      I absolutely agree. Use something like Ollama. Do keep in mind that it takes a lot of computing resources to run these models: about 5GB of RAM, and about a 3GB file size for the smaller-sized ollama-uncensored.

      • LainTrain@lemmy.dbzer0.com
        ↑2 ↓1 · 11 months ago
        11 months ago

        It’s not great, but an old GTX GPU can be had cheaply if you look around for refurbs; as long as there’s a warranty, you’re gold. Stick it into a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32GB RAM, and a solid GPU for under $200.

        • HelloHotel@lemm.ee
          ↑1 · edited · 11 months ago

          It’s the RAM requirement that stings right now. I believe I’ve got the specs, but I was told (or misremember) a 64GB RAM requirement for a model.

          • LainTrain@lemmy.dbzer0.com
            ↑1 ↓1 · 11 months ago

            IDK what you’ve read, but I have 24GB and can use DreamBooth and fine-tune Mistral no problem. RAM is only required to load the model briefly before it’s passed to VRAM, IIRC, and that’s the main deal: you need 8GB of VRAM as an absolute minimum, and even my 24GB of VRAM is often not enough for some high-end stuff.

            Plus, RAM is actually really cheap compared to a GPU. Remember, it doesn’t have to be super fancy RAM either; DDR3 is fine if you’re not gaming on something modern like a Ryzen.

  • Fades@lemmy.world
    ↑13 · 11 months ago

    Why the fuck would you give any AI your password??? People are so goddamn stupid