Facebook Vice President of Integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform had fallen by 50 percent in the past three years, and that “a story that the technology we use to combat hate speech is inadequate and that we are deliberately misrepresenting our progress” was false.
“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote. “What these documents show is that our integrity work is a journey of several years. While we will never be perfect, our teams are constantly developing our systems, identifying problems and building solutions.”
The post appeared to be a response to a Sunday article in the Wall Street Journal, which reported that the Facebook employees tasked with keeping objectionable content off the platform do not believe the company can reliably screen for it.
The WSJ report states that internal documents show that two years ago, Facebook cut the time its human reviewers spent on hate speech complaints and made other adjustments that reduced the number of complaints. That, in turn, helped create the appearance that Facebook’s artificial intelligence was more successful at enforcing the company’s rules than it actually was, according to the WSJ.
A team of Facebook employees found in March that the company’s automated systems removed posts accounting for only 3 to 5 percent of views of hate speech on the platform, and less than 1 percent of all content that violated its rules against violence and incitement, the WSJ reported.
But Rosen argued that focusing solely on content removal was “the wrong way to look at how we fight hate speech.” He said removal technology is just one method Facebook uses to combat it. “We need to be confident that something is hateful before we remove it,” Rosen said.
Instead, he said, the company believes a more important measure is the prevalence of hate speech that people actually see on the platform, and how the company is reducing it with various tools. He claimed that for every 10,000 views of content on Facebook, there were five views of hate speech. “Prevalence tells us what infringing content people see because we missed it,” Rosen wrote. “In this way we evaluate our progress most objectively, because it gives the most complete picture.”
But the internal documents obtained by the WSJ showed that certain kinds of content could evade Facebook’s detection, including videos of car accidents showing people with graphic injuries and violent threats against trans children.
The WSJ has produced a series of reports on Facebook based on internal documents from whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact the Instagram platform could have on teens. Facebook has disputed the reporting based on the internal documents.