Facebook claims it has drastically reduced hate speech prevalence

Facebook has responded to the latest criticism of its platform, claiming in a lengthy new statement that over the previous three quarters it has dramatically reduced the amount of hate speech its users encounter. The company focuses on the prevalence of hate speech — the proportion of viewed content that is hate speech — rather than the total amount of problematic content on its platform.

Facebook argues that, following this roughly 50 percent decline in prevalence over the past several quarters, hate speech now accounts for only around 0.05 percent of the content its users see; that translates to about five views of hate speech for every 10,000 content views. The company says it uses a variety of tools to detect problematic content and route it to reviewers for possible removal, among other measures.
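As a rough check on the arithmetic above (a minimal sketch; the figures are Facebook's own reported numbers, not independent data):

```python
# Prevalence: the share of all viewed content that is hate speech.
prevalence = 0.05 / 100  # 0.05 percent, as Facebook reports

views = 10_000
hate_speech_views = prevalence * views
print(hate_speech_views)  # 5.0 — five views per 10,000, matching the claim

# A roughly 50 percent decline implies prevalence was about twice as high
# (~0.10 percent) three quarters earlier.
earlier_prevalence = prevalence * 2
print(earlier_prevalence * 100)  # 0.1 (percent)
```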

The statement comes from Guy Rosen, Facebook’s VP of Integrity, and specifically addresses the recent publication of leaked internal documents in a Wall Street Journal report. Rosen said, among other things, in his post:

Data from leaked documents is being used to construct a narrative that our technology for combating hate speech is inadequate and that we are deliberately misrepresenting our progress. This isn’t correct. Like our users and advertisers, we don’t want to see hate on our platform, and we’re open about our efforts to eliminate it. These documents show that our commitment to integrity is a multi-year journey. Even though we will never be perfect, our teams are constantly working to improve our systems, uncover problems, and find solutions.

Rosen argues that prevalence is the most important metric for measuring hate speech on Facebook. He also addresses the contentious practice of leaving up hate speech that doesn’t quite meet the “standard for removal,” emphasising that Facebook’s systems instead reduce its distribution to users.

Rosen says:

When it comes to removing content automatically, we set a high bar. If we didn’t, we’d risk making more mistakes on content that appears to be hate speech but isn’t, harming the very people we’re trying to protect, such as those who describe or condemn hate speech.

Source: about.fb | wsj