A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious reasons.
Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led McCorkle, still in his work uniform, out of the theater in handcuffs.
One doesn’t need to browse AI-generated images for longer than five seconds to realize these models can generate a ton of stuff that you can know with absolute certainty wasn’t in the training data. I don’t get why people insist on the narrative that they can only output copies of what they’ve already seen. What’s generative about that?
If you took a minute to read the article:
So not only do the online models have CSAM in their training data, but people are also downloading open-source software, and I’d be very surprised if they weren’t feeding it CSAM.
That doesn’t dispute my argument: generative AI can create images that are not in the training data. The model doesn’t need to have seen something, as long as the person using it knows what it looks like and can write the right prompt. The corn dog I posted below is a good example. You can be sure that wasn’t in the training data, yet the model still generated it.
Since that discovery, the online model providers have scrubbed the offending sources, retrained, and added safeguards to their models to try to prevent it.