The UK government has created its own artificial intelligence tool to help identify online extremist content, in its latest attempt to compel tech companies to tackle the issue.
The technology is expected to help smaller companies identify and remove content that promotes terrorism, an area where the UK government has been critical of the industry. Larger tech companies, including Facebook, have already begun using their own AI tools to remove terror content from their platforms.
However, in an interview with the BBC, Amber Rudd, the Home Secretary, said that she would not rule out requiring tech companies to make use of the technology by law.
The tool was developed with ASI Data Science, a UK company, at a cost of £600,000. It is claimed to detect approximately 94 percent of online activity produced by the Islamic State (IS) group, with 99 percent accuracy. Human reviewers then evaluate the flagged content and decide whether to remove it.
On a visit to Silicon Valley to meet tech companies, Rudd stated: “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images. This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”
According to an analysis by the Home Office, IS made use of more than 400 platforms in 2017.
Yesterday, Unilever, one of the world’s biggest advertisers, heightened the pressure on tech companies, warning that it would pull its advertising from their online platforms if they continued to fail to tackle issues such as toxic culture and fake news.