People trying to search for extremist-related data in Google’s search engine will be diverted to anti-radicalization sites instead, a senior executive for the tech giant has announced.
The move is part of two pilot programs aimed at curbing Islamic State (IS, formerly ISIS/ISIL) influence online.
The other program will focus on easier identification of extremist videos, Anthony House, Google’s senior manager for public policy and communications, said at a counter-extremism hearing in the UK parliament. Facebook and Twitter representatives were also present at the hearing.
“We should get the bad stuff down, but it’s also extremely important that people are able to find good information, that when people are feeling isolated, that when they go online, they find a community of hope, not a community of harm,” House emphasized, according to the Guardian.
He added that 14 million videos were taken down from YouTube in 2014 for a range of reasons, including terrorist content. The company also received some 100,000 ‘flags’ – signals from users that certain content is inappropriate.
Facebook has turned into a “hostile place” for IS, according to Simon Milner, Facebook’s policy director for UK and Ireland, Middle East, Africa and Turkey.
“Keeping people safe is our number one priority. Isis is part of that, but it’s absolutely not the only extremist organization or behavior that we care about,” Milner said.
He added that radicalization doesn’t only happen online – it’s also connected to real-life contact.
All three social media giants were grilled by MPs over how they could combat the use of Facebook, Google, and Twitter in IS propaganda.
In particular, committee chairman Keith Vaz asked the companies to state how many people monitor content – so-called ‘hit squads.’ Google and Facebook didn’t give a number, while Twitter executives said there were “more than 100” personnel involved in the task.
The three companies were also questioned about the threshold they apply when deciding to “notify the law enforcement agencies.”
Google and Facebook answered that the threshold was a “life threat,” while Twitter’s UK public policy manager said that such a threshold basically doesn’t exist.
“We don’t proactively notify law enforcement of potential terrorist material,” Nick Pickles said. “If we’re taking down tens of thousands of accounts, we’re not in a place to judge the credibility of each of those threats. We don’t want to swamp law enforcement with false threats.”