
Facebook Outlines Its Strategy for Taking Down Extremist Content

Facebook's response comes as European officials call for new laws that would fine social media companies for not swiftly removing extremist content.
IMAGE: The Facebook logo.

Facebook has been quietly using an arsenal of artificially intelligent tools to help identify and remove extremist content before it can be seen by the larger community, according to the company.

Under growing pressure from officials in the United Kingdom and France, Facebook has publicly shared for the first time an inside look at how it's tackling the threat of online extremism.

IMAGE: A woman holds a tablet displaying the WhatsApp logo in front of a screen showing the Facebook logo.

Related: Britain, France Propose Legal Liability for Websites That Don’t Remove Extremism

With nearly 2 billion people using Facebook and posting in 80 languages, monitoring every corner of Facebook is a monumental task. That's where artificial intelligence comes into play.

"We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism we find ourselves," the company's policy team said in a newsroom post.

Facebook said it has 150 people on its counterterrorism team, constantly fine-tuning tactics for taking down terrorist content.

The main targets of the "most cutting edge techniques" are ISIS, al Qaeda and their affiliates, although Facebook's team said it expects to "expand to other terrorist organizations in due course."

Those techniques include image matching technology, the same system Facebook is using to fight revenge porn on the social network.

When a user uploads an image or a video, Facebook's AI can check to see whether it matches "a known terrorism photo or video." If it does, it won't be uploaded.
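As a rough sketch of how such a check could work (Facebook has not published its implementation), the snippet below matches an upload's fingerprint against a set of known hashes. The `KNOWN_TERROR_HASHES` set and `check_upload` function are hypothetical, and a production system would use perceptual hashes that survive re-encoding and cropping rather than the exact SHA-256 digest used here.

```python
import hashlib

# Hypothetical store of fingerprints of previously removed terrorist
# images and videos. In practice this would hold perceptual hashes
# (robust to re-encoding), not exact SHA-256 digests.
KNOWN_TERROR_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest identifying the uploaded file's contents."""
    return hashlib.sha256(media_bytes).hexdigest()

def check_upload(media_bytes: bytes) -> bool:
    """Return True if the upload should be blocked as known terrorist content."""
    return fingerprint(media_bytes) in KNOWN_TERROR_HASHES
```

In this sketch, an upload pipeline would call `check_upload` before publishing a file and reject the post on a match.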

Facebook is also experimenting with language understanding to help it better spot patterns and signals that may indicate a post contains extremist text. To train the system, Facebook is using text praising al Qaeda and ISIS that has already been removed.

Because this relies on machine learning, the idea is for Facebook's technology to get smarter the more it learns about the words, phrases and ways extremists post online.
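To illustrate the general approach, here is a minimal text classifier trained on examples of previously removed posts, built with common open-source tooling. This is not Facebook's actual model, which it has not disclosed; the training examples, labels and feature choices below are invented for the sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: posts previously removed for praising
# terrorist groups (label 1) mixed with ordinary posts (label 0).
texts = [
    "join the fighters and pledge allegiance ...",  # removed post
    "great recipe for dinner tonight",              # ordinary post
    # ... thousands more examples in a real system
]
labels = [1, 0]

# TF-IDF features feeding a logistic-regression classifier: a simple,
# common baseline for text classification, not Facebook's actual model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def extremism_score(post: str) -> float:
    """Probability that a new post resembles previously removed text."""
    return model.predict_proba([post])[0][1]
```

The more removed posts such a model is retrained on, the better it gets at recognizing the vocabulary and phrasing extremists use, which is the "gets smarter over time" behavior the company describes.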

Facebook's team also says it's making progress busting "clusters" of extremists, because many tend to congregate in one place.

When a page, a group, a post or a profile is identified as supporting terrorism, Facebook uses an algorithm to flush out pages or profiles that may be related.

"We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account," the post said.

While artificial intelligence is leading the charge to weed out extremist content, Facebook acknowledged that human help is crucial.

Mark Zuckerberg, Facebook's co-founder and chief executive, announced last month that Facebook plans to hire 3,000 more people, in addition to its 4,500 existing moderators, to help rid the site of harmful content.

"We want Facebook to be a hostile place for terrorists," the Facebook post said. "The challenge for online communities is the same as it is for real world communities — to get better at spotting the early signals before it's too late."
