Facebook Apologizes for Moderation ‘Mistakes’

Photo by Brian Solis/Flickr

With more than 2 billion users and counting, moderating Facebook is an increasingly challenging job. The unenviable task falls on the shoulders of a 7,500-strong team of content moderators (alongside the site's algorithms), who scour countless unsightly posts ranging from images of child abuse to violent terrorist material. Unsurprisingly, they do not always get things right, in part because of Facebook's ambiguous guidelines. And so yet another report of hateful material slipping through the cracks has surfaced, this time from ProPublica.

The non-profit organisation sent Facebook a sample of 49 items drawn from its pool of roughly 900 crowdsourced posts, some containing hate speech and some legitimate expression, and the social media giant admitted that its reviewers made mistakes in 22 of those cases. In six cases, Facebook blamed users for not flagging the posts correctly, and in two more it said it did not have enough information to respond. The company, however, defended 19 of its decisions. The posts included racist, sexist, and anti-Muslim rhetoric.

In a statement, Facebook VP Justin Osofsky said: “We’re sorry for the mistakes we have made. We must do better.” Osofsky said the company would grow its safety and security team to a total of 20,000 people in 2018 in an effort to better enforce its community standards. He added that Facebook deletes about 66,000 posts reported as hate speech every week.

On top of its fight against misinformation, Facebook has also been adding new tools to combat sensitive material. In April, the company introduced a reporting mechanism to curb revenge porn, and earlier this month it launched features to help users block or ignore harassers.