Another country has joined the growing ranks of those saying Facebook's content-filtering systems are not good enough.
Faced with online content it says has driven deadly sectarian violence, Sri Lanka this week banned Facebook's social media and messaging services in the country.
By forcing internet service providers to pull the plug on WhatsApp, Instagram, and Facebook, Sri Lanka's government hopes to stem the spread of fake news and hate speech it blames for attacks on the country's Muslim minority.
The ban came after years in which critics said that neither Facebook nor Sri Lanka's government was doing enough to stop the spread of such harmful posts.
The move is also a reminder of why clearing social media sites of dangerous content may be toughest in Facebook's home country.
Since the US Constitution protects all speech except that intended to incite imminent violence, the US government cannot arbitrarily order YouTube (a unit of Alphabet), Twitter, Facebook, or other social media sites off the internet.
In a statement emailed to CNBC, Facebook said:
“We have clear rules against hate speech and incitement to violence and work hard to keep it off our platform. We are responding to the situation in Sri Lanka and are in contact with the government and non-governmental organisations to support efforts to identify and remove such content.”
A source familiar with the company's thinking on the situation in Sri Lanka told CNBC that the company believes restricting internet access can deprive people of a vital communication tool in a time of crisis, and that it hopes access will be restored soon.
The ban in Sri Lanka came the same week that Germany's new coalition government said it may revise a recently enacted law that punishes internet companies for failing to remove hate speech promptly.
The law is seen as a test case in Europe's effort to rein in harmful content on social media. Critics say it has caused some speech to be unfairly removed.