Facebook releases tools to flag harmful content on GitHub

It hopes sharing its algorithms will help safeguard children and remove more harmful content.

Facebook wants to rid the internet of garbage. But it can't do that alone. So today, it's making two of its photo- and video-flagging technologies open-source and available on GitHub. It hopes the algorithms will help others find and remove harmful content -- like child exploitation, terrorist propaganda and graphic violence.

Currently, when Facebook finds offensive photos or videos, it removes them, and its algorithms assign each file a hash, or digital fingerprint. Its technology can then use those hashes to determine whether two files are identical or similar, even without access to the original image or video. So when multiple copies of, say, a terrorist video appear online, Facebook has a better chance of spotting them.
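
At its core, the matching step is simple: once every file has a fingerprint, deciding whether two files are copies reduces to counting the bits where their fingerprints differ. Here's a minimal Python sketch of that idea; the 256-bit hash values and the distance threshold are illustrative assumptions, not output from Facebook's tools.

```python
# Illustrative sketch of fingerprint matching, assuming a 256-bit
# perceptual hash like the ones described above. The hash values and
# the distance threshold are made-up examples, not real tool output.

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_match(a: int, b: int, threshold: int = 31) -> bool:
    """Treat two files as copies of the same content if their
    fingerprints differ in only a handful of bits; the slack lets a
    match survive re-encoding, resizing or light cropping."""
    return hamming_distance(a, b) <= threshold

# A known-bad fingerprint and a lightly altered re-upload of it:
original = 0xF0E1D2C3B4A59687F0E1D2C3B4A59687F0E1D2C3B4A59687F0E1D2C3B4A59687
reupload = original ^ 0b1011  # a few bits flipped by re-compression
print(is_match(original, reupload))  # True: flagged without the original file
```

The point of the slack in the threshold is resilience: an exact-hash scheme would miss a video that had merely been re-compressed, while a small allowed distance still catches it.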

These algorithms, known as PDQ and TMK+PDQF, will now be available to Facebook's industry partners, smaller developers and nonprofits. The first, PDQ, is a photo-matching tool that was inspired by pHash but built from the ground up. The second, TMK+PDQF, is the video-matching equivalent, developed by Facebook's AI Research team together with the University of Modena and Reggio Emilia in Italy. For those who already use content-matching technology, Facebook says PDQ and TMK+PDQF can offer another layer of defense and allow different hash-sharing systems to talk to each other.
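
That interoperability pitch is easier to picture with a sketch. If partners publish fingerprints as plain hex strings, any platform can screen uploads against the pooled list without ever receiving the underlying media. Everything below -- the function names, the example hash and its label -- is hypothetical and not part of the released PDQ code.

```python
from typing import Optional

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical shared list: hex-encoded 256-bit fingerprints contributed
# by different organizations, each mapped to a category label.
SHARED_HASHES = {
    "f0e1d2c3b4a59687" * 4: "known-harmful",
}

def check_upload(upload_hash_hex: str, max_distance: int = 31) -> Optional[str]:
    """Return the category of the nearest known-bad fingerprint, if any.

    Because only hashes cross organizational boundaries, partners can
    cooperate without redistributing the harmful content itself.
    """
    upload = int(upload_hash_hex, 16)
    for known_hex, label in SHARED_HASHES.items():
        if hamming_distance(upload, int(known_hex, 16)) <= max_distance:
            return label
    return None

# An upload whose fingerprint differs from a listed one by two bits:
candidate = format(int("f0e1d2c3b4a59687" * 4, 16) ^ 0b101, "064x")
print(check_upload(candidate))  # "known-harmful"
```
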

Facebook announced the open-source tools as part of its Child Safety Hackathon, and it's specifically hoping the technology might help safeguard children. It could be used in conjunction with Microsoft's cloud-based PhotoDNA tool and Google's Content Safety API -- both of which were released with the goal of protecting kids. After the discovery of an alleged child pornography ring on YouTube earlier this year, tools like these may be more important than ever.