
Tech Companies Create Shared Database To Track, Remove 'Violent Terrorist Imagery'

Facebook headquarters in Menlo Park, Calif., as seen in 2015.
Alison Yin / Invision for Facebook

Facebook, Microsoft, Twitter and YouTube say they are creating a database to keep track of terrorist recruitment videos and other terror-related images that have been removed from their services.

In a joint statement posted by Facebook on Monday, the company said:

"Starting today, we commit to the creation of a shared industry database of 'hashes' — unique digital 'fingerprints' — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services.

"By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online."

The collaboration will allow companies to cross-reference videos and pictures that show up on their services against a database of those that have already been removed from other services.

For example, if a video glorifying violence in the name of the Islamic State has been removed from Twitter, it will be assigned a unique "fingerprint" which will be stored in the database. If the same video shows up on Facebook, Facebook will be able to match the video and, if it chooses, to remove it.
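The companies have not published the technical details of the shared database, but the lookup workflow described above can be sketched in a few lines. In this simplified illustration, a SHA-256 hash stands in for the "fingerprint" (real systems use perceptual hashes that survive re-encoding, while a cryptographic hash only catches exact byte-for-byte copies); the function and variable names are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a 'fingerprint' for a piece of media.

    Simplification: SHA-256 over the raw bytes. Production systems
    use perceptual hashes that tolerate re-encoding and resizing,
    but the database-lookup logic is the same.
    """
    return hashlib.sha256(data).hexdigest()

# Shared industry database: hashes of content already removed elsewhere.
shared_hashes: set[str] = set()

def report_removed(data: bytes) -> None:
    """One platform removes a video and contributes its hash."""
    shared_hashes.add(fingerprint(data))

def matches_known_content(data: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(data) in shared_hashes

# Twitter removes a video and shares its hash...
report_removed(b"<removed video bytes>")
# ...so Facebook can flag the same upload for its own review.
print(matches_known_content(b"<removed video bytes>"))  # True
print(matches_known_content(b"<some other video>"))     # False
```

Note that a match only flags the content; as the article explains, the decision to remove it remains with each company.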

The choice about whether to remove such content falls to the companies, although videos and images related to terrorist or extremist organizations can also be pertinent to law enforcement.

As we have reported, identifying what is, or is not, terrorist imagery can be challenging for tech companies. For example, last year Facebook removed (at least temporarily) a parody photo posted by journalist Sultan Al-Qassemi depicting a rainbow pride filter over a picture of the leader of ISIS. Another Facebook user had taken it literally, and flagged it for containing "graphic violence."

It was unclear whether the photo was permanently removed by the company.

Twitter said earlier this year that it had "increased the size of the teams that review reports," and suspended more than 125,000 accounts since mid-2015 — that number has since grown to about 360,000 — for what it called connections to terrorist or extremist groups.

But NPR's Aarti Shahani reported that Twitter noted there is no "magic algorithm" for identifying terrorist content on the Internet, and that it was forced to decide whom to ban based on "very limited information and guidance."

"We should have government oversight for a database like this," Jillian York of the Electronic Frontier Foundation told NPR. York's organization studies and advocates for First Amendment rights on the Internet.

"I think that if there's illegal content, then it's illegal already. We have laws to eliminate it," she said. However, she recognizes that "doing this privately is a way that companies privately can respond to their shareholders and to the public interest."

The "hashing" technology that companies use to create a unique fingerprint for a given image or video has also been used to keep track of and take down child pornography across Internet platforms. Companies including Microsoft and Twitter use it to prevent such images from proliferating on their websites.
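The specific algorithms these companies use (such as Microsoft's PhotoDNA) are proprietary, but the core idea of a perceptual hash can be illustrated with a toy "average hash": nearly identical images produce nearly identical fingerprints, unlike a cryptographic hash, where any change scrambles the output. Everything below is a simplified sketch, not any company's actual method.

```python
def average_hash(pixels: list[int]) -> str:
    """Toy perceptual hash over downscaled grayscale pixels (0-255).

    Each bit records whether a pixel is brighter than the image's
    average, so small global changes (brightness, mild noise)
    leave most bits unchanged.
    """
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming(a: str, b: str) -> int:
    """Count differing bits; small distance means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))

# A stand-in 8x8 image: dark top half, bright bottom half.
original = [10] * 32 + [200] * 32
# The same image, slightly brightened (e.g., after re-encoding).
brightened = [p + 5 for p in original]
# An unrelated image with the opposite layout.
unrelated = [200] * 32 + [10] * 32

print(hamming(average_hash(original), average_hash(brightened)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))   # 64
```

A platform would treat distances below some threshold as a match and escalate the upload for human review, which is exactly the "human in the loop" that Farid describes below.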

Last year one of the developers of the technology, Hany Farid, told NPR the technology needed to be used in conjunction with real people, who can pick out nuances and make judgment calls.

"At the end of the day, somebody has to be looking at the content," Farid said. "There has to be a human in the loop because there are subtleties and nuances to the way this technology is being used."

Tech companies do not disclose the exact methods by which they identify and decide to remove content from their sites. As we have reported, Google, Facebook, Microsoft and Twitter all employ people to review content that has been flagged by users or by algorithms for violating the terms of service.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Rebecca Hersher (she/her) is a reporter on NPR's Science Desk, where she reports on outbreaks, natural disasters, and environmental and health research. Since coming to NPR in 2011, she has covered the Ebola outbreak in West Africa, embedded with the Afghan army after the American combat mission ended, and reported on floods and hurricanes in the U.S. She's also reported on research about puppies. Before her work on the Science Desk, she was a producer for NPR's Weekend All Things Considered in Los Angeles.
