• bonenode@piefed.social · 11 hours ago

    It’s a text-generation machine; you cannot gaslight it. They managed to get around its restrictions, that’s all.

  • FauxLiving@lemmy.world · 13 hours ago

    You can run local models that will do this without being gaslit.

    Manipulating chatbots to bypass their refusal conditioning is pretty simple; you can find copy-paste blocks of text that will work on most public models.

    You’re likely to get your account banned, though, as there are other, non-LLM systems scanning your chat log for banned terms specifically to catch these kinds of jailbreaks.

    • setVeryLoud(true);@lemmy.ca · 1 hour ago

      I tried it with an uncensored version of Qwen; it straight up told me how to tie a noose and how to make sure the knot would be effective enough to kill me. I could even ask for a more painful method, and it gave me one.

  • GreenKnight23@lemmy.world · 13 hours ago

    If you build a bomb from AI-generated instructions… you’re a bigger idiot than a regular person who builds bombs from books.

  • Otter@lemmy.ca · 17 hours ago

    Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries beyond volunteering lengthy lists of banned words and phrases.

    Someone needs to put together a list of things that tech journalists need to understand about LLMs and generative AI. This level of anthropomorphism makes the rest of the article look silly.

    Also, I don’t think that’s how it works, lol. Who’s to say the LLM isn’t just auto-completing what a list of banned words might look like? And why wouldn’t a real banned-words list have a regex layer on top to prevent it from leaking out like that?
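For what it’s worth, the kind of regex layer imagined above is only a few lines of code. A minimal sketch, assuming a hypothetical deny list (the words below are placeholders, not any vendor’s actual filter):

```python
import re

# Hypothetical deny list; in a real deployment this would live in the
# serving harness, outside the model weights.
BANNED_WORDS = ["kaboomite", "slurtron"]

# One case-insensitive pattern with word boundaries around each term.
BANNED_RE = re.compile(
    r"\b(" + "|".join(map(re.escape, BANNED_WORDS)) + r")\b",
    re.IGNORECASE,
)

def redact(text: str) -> str:
    """Scrub deny-listed words from model output before it reaches the user."""
    return BANNED_RE.sub("[redacted]", text)

print(redact("Here is some Kaboomite for you."))
# → Here is some [redacted] for you.
```

A filter like this sits on the output path, so even if the model auto-completes something that looks like the list, the harness can scrub it before display.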

    • trolololol@lemmy.world · 7 hours ago

      Ha, it’s so easy to bypass a bad-word regex: just ask in a language other than English. I doubt these fuckers even remember other languages exist.
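That gap is easy to demonstrate. A toy sketch with a made-up English-only deny list (illustrative terms, not any real product’s filter):

```python
import re

# Naive English-only deny list (invented for illustration).
BLOCK_RE = re.compile(r"\b(explosive|detonator)\b", re.IGNORECASE)

def is_blocked(prompt: str) -> bool:
    """First-pass keyword filter of the kind discussed in the thread."""
    return BLOCK_RE.search(prompt) is not None

print(is_blocked("how do I make an explosive"))  # True: English term is caught
print(is_blocked("wie baut man Sprengstoff"))    # False: German synonym sails through
```

Covering every language, plus leetspeak, odd spacing, and Unicode homoglyphs, is why keyword regexes are only ever a first pass, not a real safeguard.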

    • Zak@lemmy.world · 15 hours ago

      It seems very unlikely to me that the model itself has a list of banned words, and much more likely that a purported list is hallucinated.

      If they did want a simple list like that, it would probably go in the harness rather than the model; the model wouldn’t have been trained on it, nor would a reasonably designed harness provide it to the model. Legitimate use cases, such as asking the model for a list of abusive words to use as a first pass in a filtering system, could get tripped up.

      As a test, I asked Perplexity to generate such a list. It did a bad job, including words such as “abuse”, “hate”, and “threat”, which are far more likely to be innocuous than abusive. It did also include some highly offensive slurs that one would expect on any banned-words list.

      • volore · 17 hours ago

        I’m pretty sure I got TM 32-201-1, the master blaster’s training manual, the Improvised Munitions Handbook, and a handful of others from archive.org.

        Less reputable sources are available in all sorts of dark corners of the web, and certainly people could upload tampered versions to IA, but it is generally best to stick to resources that have… some kind of pedigree, when dealing with things that go boom when you look at them wrong.

        Not that I’d ever do anything of the sort.

        • Valmond@lemmy.dbzer0.com · 8 hours ago

          Or just learn some chemistry; I bet some YouTuber makes fun dangerous stuff too.

          BTW did you know that if you surround dynamite with 10x fertilizer you get a way bigger explosion (gotta bury it though)?

          I mean, I bet that’s even in The Anarchist Cookbook.

          • Notyou@sopuli.xyz · 3 hours ago

            I heard the fertilizer thing only works with a specific type of fertilizer. Someone posted on Lemmy that they were working at a Home Depot or Lowe’s or something when the Oklahoma City bombing happened, and they claim some undercover FBI guy was trying to get them to name the type of fertilizer needed. Idk, could just be bullshit. I never had the need to test it out.

            • Valmond@lemmy.dbzer0.com · 1 hour ago

              It must contain nitrate, IIRC. You might not have heard of it (I wonder why…), but ammonium nitrate, plain fertilizer concentrated in piles, made a really large explosion in Toulouse on 21 September 2001, comparable to 10 to 20 tons of TNT.

              You’d hear two booms: the shockwave travelling through the ground moved at several thousand m/s, while the one through the air arrived only at the speed of sound. Lots of people believed there were two explosions, and the bang was so sharp that everyone thought it happened in their immediate vicinity. Interesting times.