The criticism, in interviews, on Twitter, and in a hearing on Capitol Hill Wednesday, followed the unveiling of the social media giant’s policy to remove video forgeries produced with the aid of artificial intelligence that show people doing and saying things they never did. Deepfakes have raised the possibility of extensive political manipulation — a pressing concern going into the 2020 presidential election.
“Big tech failed to respond to grave threats posed by deepfakes as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate,” said Democratic Representative Jan Schakowsky of Illinois, who chairs the subcommittee that hosted a hearing Wednesday on online deception.
Some experts criticized Facebook for zeroing in only on deepfake videos when similar technology can be used to manipulate audio recordings and text intended to mislead people.
“The manipulation of images and text and audio can be just as pernicious, and in fact even more so,” said Lindsay Gorman, a fellow for emerging technologies at the Alliance for Securing Democracy. “You could create completely automated fake news.”
She and others said that Facebook’s proposal misses a bigger problem — disinformation that is easy to create and spread on digital platforms, as evidenced by Russia’s efforts to bolster Donald Trump’s 2016 presidential campaign through fraudulent personas, sharing of fake news and attempts to sow social divisions.
“Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” said a tweet from Drew Hammill, deputy chief of staff to House Speaker Nancy Pelosi.
Facebook’s policy pledges to remove content that has been “edited or synthesized” beyond adjustments for quality or clarity and is deemed likely to mislead viewers. The new rules won’t apply to parody or satire.
“While these videos are still relatively rare on the internet, they present a significant challenge for our industry and society as their use increases,” said Monika Bickert, Facebook’s vice president of global policy management, in prepared testimony for the hearing.
Democratic Representative Adam Schiff, chairman of the House Intelligence Committee, said in a statement Tuesday that Facebook’s moves were “a sensible and responsible step.”
“As with any new policy, it will be vital to see how it is implemented, and particularly whether Facebook can effectively detect deepfakes at the speed and scale required to prevent them from going viral,” said Schiff, who urged YouTube and Twitter to take similar steps.
The policy left plenty of room for other types of manipulation, experts say, including long-available deceptive techniques that have featured in political attacks. A recent video circulated on Twitter, for instance, appeared to show former Vice President Joe Biden, who is leading polls for the Democratic presidential nomination, espousing white nationalist ideas.
The video simply edited a statement by Biden to make it seem as if he was saying that European traditions are superior to other cultures, according to reports. Such a video would apparently still be allowed on Facebook under the new policy.
Pelosi was also the subject of a hoax last year when video of her was slowed down to suggest falsely that she was slurring her words and impaired during an appearance. Facebook belatedly labeled the video as false, limiting its spread, but left it up on the platform. That video would also apparently remain under Facebook’s new policy because it did not rely on certain complex technological tools and didn’t change her words — both components of the company’s Jan. 6 announcement.
Multiple Democrats on the committee cited the Pelosi and Biden videos as exemplifying their concerns, and animating their desire to tackle online manipulation, even as they discussed a range of possible solutions, including limiting a provision of U.S. law that shields platforms like Facebook from liability for most of the content that users post.
Experts also see these old-fashioned fakes as an enduring threat in 2020, despite popular concerns around deepfakes.
“If I’m somebody who wants to interfere with U.S. elections, why go to the trouble of creating a sophisticated deepfake and putting a target on my back as opposed to just doing these simpler things?” said Hany Farid, a professor at the University of California, Berkeley, who met with Facebook on its policy.
Facebook’s Bickert said in her testimony that deceptive posts that don’t fall under the new policy might still run afoul of old rules, such as those that apply fact-checking to news stories, ban nudity or forbid fake accounts and coordination. (The vast majority of deepfakes are currently used to produce phony pornography, research has shown.)
But that wasn’t enough for some lawmakers who see the platform dealing with a flood of problems ahead of the 2020 elections, and an even bigger problem of online deceptions.
“I remain very concerned that Facebook is putting its bottom line ahead of addressing misinformation,” said Representative Jerry McNerney, a California Democrat. Representative Yvette Clarke said she also planned to introduce a bill “to specifically address how online platforms deal with deepfake content,” although she didn’t elaborate.
“As these tools become more affordable and accessible, we can expect deepfakes to be used to influence financial markets, discredit dissidents and even incite violence,” said Clarke, a New York Democrat who previously introduced legislation requiring the labeling of deepfakes and other measures.