For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users’ privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
Really? Humans? Maybe even qualified humans? Huh! Never would’ve thought that.
Set your timers. We’re going to hear about a non-ethical decision made by this system in 5, 4, 3, …
You act like they weren’t already making non-ethical decisions WITH humans.
What could possibly go wrong?
It’s the plot of innumerable books: give an AI a bunch of laws and guidelines, and watch it misinterpret them with catastrophic consequences. Even today, nobody really knows how LLMs work, and yet they’re going to give one control over sensitive areas.
And we all knew that they were going to do this at some point.
They already have, with numerous problems — people’s accounts and pages, like mine, being removed with no way to appeal other than to pay.
Lol