For years, Facebook and other social media companies have erred on the side of lenience in policing their sites — allowing most posts with false information to stay up, as long as they came from a genuine human and not a bot or a nefarious actor.
Now, the companies are considering a fundamental shift with profound social and political implications: deciding what is true and what is false.
The new approach, if implemented, would not affect every lie or misleading post. It would be meant only to rein in manipulated media — everything from sophisticated, AI-enabled video or audio deepfakes to crude video edits like the widely circulated, slowed-down clip of Nancy Pelosi that surfaced in May.
Still, it would be a significant concession to critics who say the companies have a responsibility to do much more to keep harmful false information from spreading unfiltered.
MacDailyNews Take: What could go wrong? And what happens when deepfakes progress past the point of detection — if they haven't already?