Now that Facebook has forwarded Russia-linked ads to Congress, it is outlining what it will do to prevent a similarly suspicious ad campaign from occurring in the future. To begin with, the tech giant is promising to make ads more transparent — it is building tools that will let you see all the ads a Page runs, not just the ones targeting you. In theory, this could help concerned people spot suspicious advertising without asking for help from Facebook or third parties. Most of Facebook's efforts, however, centre on toughening the ad review process and the standards that guide it.
Facebook is hiring 1,000 more people for its global ads review teams over the course of 2018, and is "investing more" in machine learning to help flag ads automatically. Advertisers will need "more thorough" documentation if they run ads related to US federal elections, such as verifying the organisation they work with. The social network is also toughening its policies to block ads promoting "more subtle expressions of violence," which could include ads stoking social tensions.
The social network is also aware that it is not alone in fighting Russia-backed campaigns. It is "reaching out" to industry and government leaders to share information and help set better standards, so that this does not happen elsewhere.
Facebook's moves look like they could catch dodgy ad campaigns, particularly attempts to influence elections. However, this is part of an all too common pattern at Facebook: the company implements broad changes (usually including staff hires) after failing to anticipate the social consequences of a feature. While it would be hard for any tech company to predict every possible misuse of its services, this suggests that Facebook needs to consider the pitfalls of a feature more thoroughly before it reaches the public, rather than waiting for a crisis.