AI fakes spread disinformation. Is the distrust they create even worse?

The primary aims of images like the ones targeting Omar and Armstrong are, of course, to harass, demean and discredit political opponents. But they have a secondary effect: to overwhelm the internet with unusable, false, unstable information, and then to mock, as Jackson did, the idea of finding out what’s true at all.

This type of bad information can make it genuinely difficult for people to figure out what’s real and what’s worth engaging with. And as AI-generated images like these become increasingly convincing, a new danger is emerging: that when people don’t know what is AI and what isn’t, they will distrust everything they see equally.

At that point, DiResta says, people can start to believe that “nothing is true and everything is possible,” a phrase coined by journalist Peter Pomerantsev in his 2015 book about working in Russian TV news.

The main risk in that circumstance, DiResta says, “is that you see trust fragment along very partisan lines. This has already happened to an extent. People come to believe something is true or not based on who says it.” The most successfully manipulative fake images, DiResta adds, convince people to share them quickly—before their brains can do the work of assessing whether they’re real.
