Should tech companies delete ISIS videos?
The terrorist attack on London Bridge over the weekend has reignited a debate about tech companies' responsibility in preventing terrorism. Hours after Saturday's attack, British Prime Minister Theresa May called for a regulatory crackdown on online content and criticized the tech industry for giving extremist ideology "the safe space it needs to breed."
London Mayor Sadiq Khan echoed that call in a statement Monday. "After every terrorist attack we rightly say that the internet providers and social media companies need to act and restrict access to these poisonous materials," he said. "But it has not happened ... now it simply must happen."
But analysts say pushing technology companies to remove extremist content may not be the straightforward solution it seems.
There are the expected censorship concerns, but it's not as simple as free speech versus security. Some say removing content might not be effective in disconnecting Islamic State (ISIS) recruiters from potential recruits, and may even make it harder for intelligence agencies to monitor terrorist plots online. Others suggest that focusing on online content is a distraction, and that efforts should instead aim to keep those susceptible to extremist messages from seeking them out online in the first place.
These calls come amid reports that one of the three attackers responsible for Saturday's terrorist attack may have been radicalized by extremist sermons on YouTube.
ISIS videos and other materials that have surfaced online in the past year highlight how to maximize damage with a vehicle and knife attack, a script eerily similar to the London Bridge attack that left seven dead and 48 injured.
The line between stifling speech and thwarting terrorism
The open nature of the internet has long been criticized by regulatory advocates as offering terrorists a free forum to circulate extremist content. By some estimates, as many as 90 percent of terrorist attacks in the past four years have had an online component. But those opposed to a regulatory approach worry that cracking down on questionable content risks painting with too broad a brush, censoring legitimate content.
When it comes to extremist content, treading that line is tricky. Unlike some content, such as child pornography, holding extreme views isn't illegal, and neither is broadcasting them in the United States. As such, it takes a value judgment to decide which content to remove.
An algorithm can't pick up on the necessary nuances to find the line between over-censorship and dangerous extremist content, says Aram Sinnreich, professor of communications at American University in Washington. "There are no paths that preserve anything remotely approaching an open internet, and at the same time preventing ISIS from posting recruitment videos."
Many large tech companies have tried to compromise by employing an army of human workers to review content flagged by users as problematic. The reviewers use the tech company's terms of use as guidance, but in the case of extremist content, it's not always black and white.
But Hany Farid, senior adviser to the nonprofit Counter Extremism Project, says it is possible for an algorithm to find the sweet spot, as long as humans work with it. A computer science professor at Dartmouth College, Dr. Farid helped develop the tool now used by most internet companies to identify and remove child pornography. He has also developed a more sophisticated tool that he says can be harnessed to find and remove extremist content.
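The tool Farid describes works by robust hashing: known illegal or extremist images are fingerprinted once, and every new upload is compared against that list. Below is a minimal sketch of the general idea in Python. The simple "average hash" and the blocklist here are illustrative assumptions, not Farid's actual algorithm, which is far more resistant to cropping, re-encoding, and other edits.

```python
# Minimal sketch of hash-based content matching (illustrative only).
# Real systems use robust perceptual hashes, not this simple average hash.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to grayscale, then set one bit per pixel above the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(path: str, blocklist: set[int], max_distance: int = 5) -> bool:
    """Flag an upload whose hash is close to any hash of known flagged content.

    The blocklist is a hypothetical set of hashes of previously vetted content.
    """
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in blocklist)
```

The design point, consistent with Farid's "humans work with it" caveat, is that matching happens only against fingerprints humans have already vetted: the value judgment stays with people, and the machine supplies the scale.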
Farid says internet companies' concerns about crossing the line into censorship are unfounded.
"I'm not buying the story" that it's too difficult or there's a slippery slope leading to more censorship, Farid says. "That's a smokescreen, saying there's a gray area. Of course there is. But it doesn't mean we don't do anything. You deal with the black and white cases, and deal with the gray cases when you have to."
Tech companies have gone through "an evolution of thinking" recently and are now more proactively removing content on their own, says Seamus Hughes, deputy director of the Program on Extremism at George Washington University. He points to the 2013 Boston Marathon bombing as a turning point. Investigators found clues that the attackers may have learned how to make a bomb from Inspire magazine, an online, English-language publication reportedly produced by Al Qaeda.
"It became so there was less of a level of acceptance for general propaganda to be floating out there," Mr. Hughes says.
In one initiative launched last year, the tech giants are teaming up to make it easier to spot terrorism-related content. Facebook, Microsoft, Twitter, and YouTube have developed channels to share information about extremist content and accounts so that individual companies can find and take it down more quickly.
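At its core, that sharing channel can be thought of as a pooled database of content fingerprints: when one company removes a piece of extremist content, it contributes the hash, and the other participants check new uploads against the pool. A toy sketch of the idea follows; the class and method names are invented, since the consortium's actual interface is not public.

```python
# Toy sketch of an industry-shared database of content fingerprints.
# All names here are illustrative assumptions, not the real system's API.
from dataclasses import dataclass, field

@dataclass
class SharedHashDB:
    # fingerprint -> name of the company that first contributed it
    _entries: dict[str, str] = field(default_factory=dict)

    def contribute(self, fingerprint: str, company: str) -> None:
        """Record the fingerprint of content one company has already removed."""
        self._entries.setdefault(fingerprint, company)

    def is_known(self, fingerprint: str) -> bool:
        """Let any participant check a new upload against the pooled set."""
        return fingerprint in self._entries

db = SharedHashDB()
db.contribute("d2b0f7c4a911", "CompanyA")  # CompanyA removes a video, shares its hash
print(db.is_known("d2b0f7c4a911"))         # CompanyB's upload check now hits: True
```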
Whack-a-mole concerns
Still, some say that removing content might not actually be an effective approach to stem radicalization and recruitment by terrorist organizations.
One concern is that extremist content will simply move to other platforms.
"It's sort of a whack-a-mole kind of problem," says Eric Rosand, senior fellow in the Project on US Relations with the Islamic World at the Brookings Institution and director of The Prevention Project: Organizing Against Violent Extremism in Washington, D.C. "Terrorists will find another way to reach out with propaganda" if it's removed.
That could mean moving to smaller platforms with more encryption and less capacity to review and remove content.
This content could also move to the dark web, a section of the internet that is heavily encrypted and challenging for intelligence officials to track. The dark web's limited audience could curb recruitment for organizations like ISIS, Hughes says, but those who do make it into its depths are particularly dedicated.
And then there's the question of where intelligence agencies can best keep tabs on extremists, Hughes says. "Is it better for these guys to be on the systems where we know we can [collect information on] them, we know who everyone is, but they can reach more people? Or is it better to push them off to the margins so they're only talking to who they already were going to talk to to begin with?"
Counter-messaging
Some tech companies and government officials have been weighing alternative options to counteract extremist content. One idea is to harness the tools of the internet and social media to reach people in danger of being radicalized, in effect using the same tools as ISIS in a counter-messaging effort.
Google's 2015 pilot project, the "Redirect Method," tried to target the audience most susceptible to online recruitment and radicalization and, when they searched for certain terms, directed them toward existing YouTube videos that counter terrorists' messages. The project used the same principles businesses use to target ads to particular consumers.
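Mechanically, the targeting step resembles keyword-based ad matching: queries containing curated risk terms surface curated counter-messaging videos instead of an ad. A toy sketch of that step follows; the terms, topics, and playlist URLs are invented placeholders, and the real project curated both lists by hand.

```python
# Toy sketch of the Redirect Method's targeting step (illustrative only).
# Terms and URLs below are invented; the real curated lists are not public.
COUNTER_PLAYLISTS: dict[str, list[str]] = {
    "recruitment": ["https://youtube.com/playlist?list=COUNTER_RECRUITMENT"],
    "ideology": ["https://youtube.com/playlist?list=COUNTER_IDEOLOGY"],
}

# Placeholder search phrases mapped to the topic they signal.
RISK_TERMS: dict[str, str] = {
    "example recruitment phrase": "recruitment",
    "example ideology phrase": "ideology",
}

def counter_message_targets(query: str) -> list[str]:
    """Return counter-messaging playlists to surface alongside a risky query."""
    q = query.lower()
    return [url
            for term, topic in RISK_TERMS.items() if term in q
            for url in COUNTER_PLAYLISTS[topic]]

print(counter_message_targets("example recruitment phrase video"))
```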
Similarly, officials in the State Department's Global Engagement Center have used paid ads on Facebook as a means of reaching out to young Muslims who may be targeted by extremist recruiters. The ads are for videos and messaging that counter extremist narratives.
But online content might not be as responsible for radicalizing terrorists as some politicians imply, says Dr. Rosand of Brookings. "It's as much about the offline networks, it's as much about the grievances that drove them to violence, or made them very susceptible to violent messages, as they become radicalized."
He suggests that politicians instead encourage tech companies to invest in communities by providing alternatives to the path of terrorism. "How do you give them options, other than going online, to search for meaning in their lives? We don't invest enough in that."