A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and growing ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage: officers led McCorkle, still wearing his work uniform, out of the theater in handcuffs.

  • macniel@feddit.org

    I don’t see how children were abused in this case? It’s just AI imagery.

    It’s the same as saying that people get killed when you play first-person shooter games.

    Or that you commit crimes when you play GTA.

        • timestatic@feddit.org

          But this is the US… and it’s kind of a double standard if you’re not arrested for drawing it but you are for generating it.

            • ContrarianTrail@lemm.ee

              The core reason CSAM is illegal is not because we don’t want people to watch it but because we don’t want them to create it, which is synonymous with child abuse. Jailing someone for drawing a picture like that is absurd. While it might be in bad taste, there is no victim there. No one was harmed. Using generative AI is the same thing. No matter how much simulated CSAM you create with it, not a single child is harmed in doing so. Jailing people for that is the very definition of a moral panic.

              Now, if actual CSAM was used in the training of that AI, then it’s a more complex question. However, it is a fact that such content doesn’t need to be in the training data for a model to create simulated CSAM, and as long as that is the case, it is immoral to punish people for creating something that only looks like the real thing but isn’t.

            • puppycat@lemmy.blahaj.zone

              I don’t advocate for either, but they should NOT be treated the same: one doesn’t involve a child being traumatized. I’d rather a necrophiliac make AI-generated pics instead of… you know.

    • CeruleanRuin@lemmings.world

      Not a great comparison, because unlike with violent games or movies, you can’t say that there is no danger to anyone in allowing these images to be created or distributed. If they are indistinguishable from the real thing, it becomes impossible to identify actual human victims.

      There’s also a strong argument that the availability of imagery like this only encourages behavioral escalation in people who suffer from the affliction of being a sick fucking pervert pedophile. It’s not methadone for them, as some would argue. It’s just fueling their addiction, not replacing it.

    • Leraje@lemmy.blahaj.zone

      The difference is intent. When you’re playing an FPS, the intent is to play a game. When you play GTA, the intent is to play a game.

      The intent with AI generated CSAM is to watch kids being abused.

        • Leraje@lemmy.blahaj.zone

          There may well be the odd weirdo playing Call of Duty to watch people die.

          But everyone who watches CSAM is watching it to watch kids being abused.

      • ContrarianTrail@lemm.ee

        Punishing people for intending to do something is punishing them for thought crimes. That is not the world I want to live in.

            • Leraje@lemmy.blahaj.zone

              Intent is defined as intention or purpose. So I’ll rephrase for you: the purpose of playing an FPS is to play a game. The purpose of playing GTA is to play a game.

              The purpose of AI generated CSAM is to watch children being abused.

              • ContrarianTrail@lemm.ee

                I don’t think that’s fair. It could just as well be said that the purpose of violent games is to simulate real life violence.

                Even if I grant you that the purpose of viewing CSAM is to see child abuse, it’s still less bad than actually abusing them, just like playing violent games is less bad than participating in real violence. Also, despite the massive increase in violent games and movies, actual rates of violence are going down, so the implication that viewing such content would increase cases of child abuse is an assumption I’m not willing to make either.

                • Leraje@lemmy.blahaj.zone

                  The purpose of a game is to play a game through a series of objectives and challenges.

                  Even if I grant you that the purpose of viewing CSAM is to see child abuse

                  Very curious to hear what else you think the purpose of watching CSAM might be.

                  it’s still less bad than actually abusing them

                  “Less bad” is relative. A bad thing is still bad. If we go by length of sentencing, then rape is ‘less bad’ than murder. That doesn’t make it ‘not bad’.

                  so implying that viewing such content would increase the cases of child abuse is an assumption I’m not willing to make either.

                  OK?

                  I didn’t claim that AI CSAM increased anything at all. Literally all I’ve said is that the purpose of AI generated CSAM is to watch kids being abused.

                  Neither did I claim that violent games lead to violence. You invented that strawman all by yourself.

                  • ContrarianTrail@lemm.ee

                    A person said that there is no victim in creating simulated CSAM with AI, just like there isn’t one in video games, to which you replied that the difference is intention: the intention behind playing violent games is to play a game, whereas with viewing CSAM the intention is to view abuse material.

                    Correct so far?

                    Of course the intent is that. For what other reason would anyone want to see CSAM than to see CSAM? What kind of argument / conclusion is this supposed to be? How else am I supposed to interpret this than as you advocating for the criminalization of creating such content despite the fact that no one is being harmed? How is that not pre-emptively punishing people for crimes they’ve yet to even commit? Nobody chooses to be born with such thoughts or desires, so I don’t see the point of punishing anyone for that alone.

    • Samvega@lemmy.blahaj.zone

      It’s just AI imagery.

      Fantasising about sexual contact with children indicates that this person might groom children for real, because they have a sexual interest in doing so. As someone who was sexually assaulted as a child, it’s really not something that needs to happen.

      • HelixDab2@lemm.ee

        indicates that this person might groom children for real

        But unless they have already done it, that’s not a crime. People are prosecuted for actions they commit, not their thoughts.

        • Chozo@fedia.io

          I agree, this line of thinking quickly spirals into Minority Report territory.

          • CeruleanRuin@lemmings.world

            It will always be a gray area, and should be, but there are practical and pragmatic reasons to ban this imagery no matter its source.

      • HubertManne@moist.catsweat.com

        Seems like fantasizing about shooting people or carjacking indicates that person might do that activity for real too. There are a lot of carjackings nowadays, and you know GTA is real popular. Mmmm. /s But seriously, I’m not sure your first statement has merit, especially when you look at where to draw the line: anime, manga, oil paintings, books, thoughts in one’s head.

        • CeruleanRuin@lemmings.world

          If you’re asking whether anime, manga, oil paintings, and books glorifying the sexualization of children should also be banned, well, yes.

          This is not comparable to glorifying violence, because real children are victimized in order to create some of these images, and the fact that it’s impossible to tell which is which makes it even more imperative that all such imagery is banned, because the existence of fakes makes it even harder to identify real victims.

          It’s like you know there’s an armed bomb on a street, but somebody else filled the street with fake bombs, because they get off on it or whatever. Maybe you’d say making fake bombs shouldn’t be illegal because they can’t harm anyone. But now suddenly they have made the job of law enforcement exponentially more difficult.

          • Cryophilia@lemmy.world

            Sucks to be law enforcement then. I’m not giving up my rights to make their jobs easier. I hate hate HATE the trend towards loss of privacy and the “if you didn’t do anything wrong then you have nothing to hide” mindset. Fuck that.

        • Samvega@lemmy.blahaj.zone

          If you want to keep people who fantasise about sexually exploiting children around your family, be my guest. My family tried that, and I was raped. I didn’t like that, and I have drawn my own conclusions.

          • HubertManne@moist.catsweat.com

            Yeah, and the same goes for people who fantasize about murdering folk. You can’t say one thing is a problem without saying the other is too. I’m sorry you were raped, but I doubt it would have been stopped by banning Lolita.

            • Samvega@lemmy.blahaj.zone

              I don’t recall Nabokov’s novel Lolita saying that sexualising minors was an acceptable act.

              Thanks for the strawman, though, I’ll save it to burn in the colder months.

              • HubertManne@moist.catsweat.com

                You can call it a strawman, but whether the evil is killing folks or raping folks, the logic should be the same when discussing the non-actual versus the actual. You can say this one thing is a special case, but when it comes to freedom of speech, which covers anything not based in actual events (writing, speaking, thinking, art), carving out special circumstances becomes a real slippery slope. (That can be called a fallacy too, but like all “fallacies” it depends a lot on what backs it up and how it’s presented.)

    • TallonMetroid@lemmy.world

      Well, the image generator had to be trained on something first in order to spit out child porn. While it may be that the training set was solely drawn/rendered images, we don’t know that, and even if the output were in that style, it might very well be photorealistic imagery generated from real child porn and run through a filter.

      • MagicShel@programming.dev

        An AI that is trained on children and nude adults can infer what a nude child looks like without ever being trained specifically with those images.

              • Cryophilia@lemmy.world

                No, I’m admitting they’re stupid for even bringing it up.

                Unless their argument is that all AI should be illegal, in which case they’re stupid in a different way.

                • LustyArgonian@lemmy.world

                  Do you think regular child porn should be illegal? If so, why?

                  Generally it’s because kids were harmed in the making of those images. Since we know that AI is using images of children being harmed to make these images, as other posters have repeatedly sourced (and if you’ve looked up deepfakes, most deepfakes start from existing porn with a face swapped over top; they do this with CP as well, and must use CP videos to seed it, because an adult model would be too large)… why does AI get a pass for using children’s bodies in this way? Why isn’t it immoral when AI is used as a middleman to abuse kids?

                  • Cryophilia@lemmy.world

                    Since we know that AI is using images of children being harmed to make these images

                    As I keep saying, if this is your reasoning then all AI should be illegal. It only has CP in its training set incidentally, because the entire dataset of images on the internet contains some CP. It’s not being specifically trained on CP images.
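
                    For what it’s worth, that incidental contamination is exactly what dataset curators try to scrub before training: a classifier screens every scraped image. Below is a minimal sketch of such a pre-training filter; `load_nsfw_classifier` and its `is_unsafe` method are hypothetical stand-ins for a real trained detector (production pipelines also hash-match against lists of known illegal images).

                      # Sketch: filtering a scraped image dataset before training.
                      # The classifier is a hypothetical placeholder for a real
                      # trained detector; only cleared images reach the model.
                      import shutil
                      from pathlib import Path

                      def load_nsfw_classifier():
                          """Hypothetical stand-in for a real trained detector."""
                          raise NotImplementedError

                      def filter_dataset(raw_dir: str, clean_dir: str) -> None:
                          classifier = load_nsfw_classifier()
                          Path(clean_dir).mkdir(parents=True, exist_ok=True)
                          for img in Path(raw_dir).glob("*.jpg"):
                              # Copy only images the classifier clears, so flagged
                              # material is never seen by the generative model.
                              if not classifier.is_unsafe(img):
                                  shutil.copy(img, Path(clean_dir) / img.name)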

          • LustyArgonian@lemmy.world

            Yes, exactly. People who excuse this with “well, it was trained on all public images” are just admitting you’re right and that there is a level of harm here, since real materials are used. Even if they weren’t used, or if it was just a cartoon, the morality is still shaky because of the role porn plays in advertising. We already have laws about advertising because it’s so effective, including around cigarettes and prescriptions. Most porn, ESPECIALLY FREE PORN, is an ad to get you to buy other services. CP is not excluded from this rule; no one gets a free lunch, so to speak. These materials are made and hosted for a reason.

            The role that CP plays in most countries is difficult. It is used for blackmail. It is also used to generate money for countries (intelligence groups around the world host illegal porn ostensibly “to catch a predator,” but then why is it morally okay for them to distribute these images but no one else?). And it’s used as advertising for actual human trafficking organizations. And similar organizations exist for snuff and gore btw. And ofc animals. And any combination of those 3. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy’s Destruction and Peter Scully?

            So it’s important to not allow these advertisers to combine their most famous monkey torture video with enough AI that they can say it’s AI generated, but it’s really just an ad for their monkey torture productions. And even if NONE of the footage was from illegal or similar events and was 100% thought of by AI - it can still be used as an ad for these groups if they host it. Cartoons can be ads ofc.

        • Saledovil@sh.itjust.works

          Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we’d roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate, even though some of the affluent consider it the equivalent of eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.

        • emmy67@lemmy.world

          It didn’t generate what we expect and know a corn dog is. Hence it missed, because it doesn’t know what a “corn dog” is.

          You have proven the point that it couldn’t generate CSAM without some being present in the training data.

          • ContrarianTrail@lemm.ee

            I hope you didn’t seriously think the prompt for that image was “corn dog” because if your understanding of generative AI is on that level you probably should refrain from commenting on it.

            Prompt: Photograph of a hybrid creature that is a cross between corn and a dog
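
            For reference, a prompt like that works by composing two concepts the model already knows. Here is a minimal sketch of how such an image is generated, assuming a Stable Diffusion-style model via the diffusers library; the checkpoint name and settings are illustrative, not necessarily what produced that image.

              # Sketch: concept composition in a text-to-image model.
              # Requires `diffusers` and `torch`; the checkpoint name is
              # an assumption, not the model actually used above.
              import torch
              from diffusers import StableDiffusionPipeline

              pipe = StableDiffusionPipeline.from_pretrained(
                  "runwayml/stable-diffusion-v1-5",
                  torch_dtype=torch.float16,
              ).to("cuda")

              # The model has seen corn and dogs separately; the text
              # encoder lets it compose the two into something that never
              # appeared in its training data.
              prompt = ("Photograph of a hybrid creature that is a cross "
                        "between corn and a dog")
              image = pipe(prompt, num_inference_steps=30,
                           guidance_scale=7.5).images[0]
              image.save("corn_dog_hybrid.png")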

            • emmy67@lemmy.world

              Then if your question is “how many photographs of a hybrid creature that is a cross between corn and a dog were in the training data?”, I’d honestly say: I don’t know.

              And if you’re honest, you’ll say the same.

              • ContrarianTrail@lemm.ee

                But you do know, because corn dogs as depicted in the picture do not exist, so there couldn’t have been photos of them in the training data, yet it was still able to create one when asked.

                This is because it doesn’t need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two it will gladly do so.

                • emmy67@lemmy.world

                  But you do know, because corn dogs as depicted in the picture do not exist, so there couldn’t have been photos of them in the training data, yet it was still able to create one when asked.

                  Yeah, except Photoshop and artists exist. And a quick Google image search will find them. 🙄

                  • ContrarianTrail@lemm.ee

                    And this proves that AI can’t generate simulated CSAM without first having seen actual CSAM how, exactly?

                    To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
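
                    That workflow is the standard img2img setup: the pipeline takes an existing image plus a prompt and re-renders it. A minimal sketch with diffusers follows; the file names and strength value are assumptions for illustration.

                      # Sketch: refining a rough doodle via an img2img pipeline.
                      # Requires `diffusers`, `torch`, and `Pillow`; file names
                      # are placeholders.
                      import torch
                      from PIL import Image
                      from diffusers import StableDiffusionImg2ImgPipeline

                      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
                          "runwayml/stable-diffusion-v1-5",
                          torch_dtype=torch.float16,
                      ).to("cuda")

                      doodle = Image.open("rough_doodle.png").convert("RGB").resize((512, 512))

                      # `strength` sets how far the model may stray from the input:
                      # low values keep the doodle's layout, high values repaint more.
                      result = pipe(
                          prompt="detailed photorealistic rendering of this sketch",
                          image=doodle,
                          strength=0.6,
                          guidance_scale=7.5,
                      ).images[0]
                      result.save("refined.png")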

      • lunarul@lemmy.world

        we don’t know that

        might

        Unless you’re operating under “guilty until proven innocent”, those are not reasons to accuse someone.

    • KillerTofu@lemmy.world

      How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate the exploitation of children all the way down.

      So no, you are making a false equivalence with your video game metaphors.

      • fernlike3923@sh.itjust.works

        A generative AI model doesn’t require the exact thing it creates in its datasets. It most likely just combined regular nudity with a picture of a child.

        • finley@lemm.ee

          In that case, the images of children were still used without their permission to create the child porn in question

          • MagicShel@programming.dev

            That’s not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

            Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.
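
            For what it’s worth, existing pipelines do roughly this: a separately trained classifier screens outputs after generation. Below is a sketch of where such a filter sits in the flow, using the stock safety checker that diffusers bundles with Stable Diffusion checkpoints; the checkpoint name is illustrative.

              # Sketch: post-generation safety filtering. Stable Diffusion
              # pipelines from `diffusers` bundle a safety checker that flags
              # outputs; flagged images are returned blacked out.
              import torch
              from diffusers import StableDiffusionPipeline

              pipe = StableDiffusionPipeline.from_pretrained(
                  "runwayml/stable-diffusion-v1-5",
                  torch_dtype=torch.float16,
              ).to("cuda")  # safety_checker is enabled by default here

              out = pipe("a photo of a dog", num_inference_steps=30)

              # `nsfw_content_detected` reports the classifier's verdict
              # for each generated image.
              if out.nsfw_content_detected and out.nsfw_content_detected[0]:
                  print("Blocked: the safety classifier flagged this output.")
              else:
                  out.images[0].save("ok.png")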

          • fernlike3923@sh.itjust.works

            That’s a whole other thing than the AI model being trained on CSAM. I’m currently neutral on this topic so I’d recommend you replying to the main thread.

          • CeruleanRuin@lemmings.world

            Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.

      • macniel@feddit.org

        Can you or anyone verify that the model was trained on CSAM?

        Besides, an LLM doesn’t need to have explicit content to derive from in order to create a naked child.

        • KillerTofu@lemmy.world

          You’re defending the generation of CSAM pretty hard here, with some vague “but no child we know of was involved” as a defense.

          • macniel@feddit.org

            I just hope that the models aren’t trained on CSAM, making the stuff they generate to fap on ““ethically reasonable,”” as no children would be involved. And I hope that those who have those tendencies can be helped one way or another, in a way that doesn’t involve chemical castration or incarceration.

      • Diplomjodler@lemmy.world

        While I wouldn’t put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don’t think that is how this stuff works.

      • grue@lemmy.world

        But the AI companies insist the outputs of these models aren’t derivative works in any other circumstances!