Facebook Stops Experiment That Discredited Fake News With Comments

It is increasingly clear that some of Facebook's methods for discrediting fake news are more effective than others. The social network has ended an experiment in which it tried to prioritize comments accusing stories of being fake, in theory flagging fake news and other sketchy articles. A Facebook spokesperson speaking to the BBC did not explain exactly why the trial was stopped (it is described as a "small test which has now concluded"). However, some publicly available examples suggest it was simply too indiscriminate.

Users included in the test noticed that the system simply promoted comments containing certain keywords, such as "lie" or "fake," regardless of what a comment was actually saying. It was not picky about the source stories, either, so users would see these accusations highlighted on trustworthy articles as well. How are you supposed to trust Facebook's judgment if it is not scrutinizing the content of the stories themselves?
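To see why this approach misfires, consider a toy sketch of keyword-only ranking (a hypothetical illustration, not Facebook's actual code): any comment containing a trigger word gets boosted, even one that is defending the story.

```python
# Hypothetical sketch of naive keyword-based comment promotion,
# illustrating the flaw described above. Not Facebook's actual code.

SUSPICION_KEYWORDS = {"fake", "lie", "hoax"}

def naive_boost(comment: str) -> bool:
    """Promote a comment if it contains any suspicion keyword."""
    words = comment.lower().split()
    return any(keyword in words for keyword in SUSPICION_KEYWORDS)

# Both comments get boosted, though only the first is an accusation:
print(naive_boost("This story is fake news."))         # True
print(naive_boost("This is real, not a lie at all."))  # True

# The check also knows nothing about the source article, so it fires
# just as readily on comments under trustworthy stories.
```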

This is not the end of the experiments, though. The spokesperson said the company will "keep working to find new ways" to fight misinformation online. In other words, Facebook knows it has a lot of work left to do; it will be a while before it can reliably downplay the right stories.