In collaboration with the University of Modena and Reggio Emilia, Facebook’s Artificial Intelligence Research team is launching two new technologies today. Known as PDQ and TMK+PDQF, they help detect identical or nearly identical photos and videos, allowing the company to trace and stop the sharing of inappropriate content on its platform. With the online space already struggling with socially abusive content, this technology will help at least curb the spread of such material. It will also help prevent child exploitation carried out in the form of illegal photos and videos. Facebook will open-source these technologies on GitHub so that smaller developers and companies can use them for free.


What are PDQ and TMK+PDQF, and how are they useful?

PDQ and TMK+PDQF detect harmful content by letting platforms share hashes, or digital fingerprints, of different types of abusive material. They provide an efficient way to represent photos and videos as short digital hashes that can determine whether two files are the same or similar. Even without the original photo or video, these hashes can be used to detect duplicate material. Companies already using similar technology can add an extra layer of protection with these tools. Other algorithms built on the same idea include pHash, aHash, dHash, and Microsoft’s PhotoDNA.
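To make the hash-and-compare idea concrete, here is a rough, simplified sketch in the spirit of dHash, one of the related algorithms mentioned above. This is not Facebook’s actual PDQ algorithm, and the tiny hand-written pixel grids below are purely illustrative: a small Hamming distance between two hashes signals visually similar content, even when the original file is no longer available.

```python
# Illustrative dHash-style perceptual hash (NOT the real PDQ algorithm).
# A "photo" here is just a list of rows of grayscale pixel values.

def dhash_bits(pixels):
    """Emit 1 when a pixel is brighter than its right-hand neighbor, else 0."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance means the images look similar."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical sample data: the second image is a slightly brightened
# copy of the first, the third is unrelated content.
original = [
    [52, 200, 31, 180],
    [90, 12, 240, 66],
    [33, 150, 75, 210],
]
brightened = [[min(p + 10, 255) for p in row] for row in original]
unrelated = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 100, 110, 120],
]

h1 = dhash_bits(original)
h2 = dhash_bits(brightened)
h3 = dhash_bits(unrelated)

print(hamming_distance(h1, h2))  # 0 -- the near-duplicate hashes match
print(hamming_distance(h1, h3))  # 4 -- different content diverges
```

Because the hash encodes only relative brightness between neighboring pixels, uniform edits such as brightening leave it unchanged, which is what lets a platform flag re-uploads of known harmful material without storing the originals.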

This technology will help prevent harmful content such as child exploitation, terrorist propaganda, and graphic violence by identifying duplicates and preventing them from being shared. Because the digital space, and social media in particular, can be so menacing, it is necessary to protect users of all ages.

Read Also: It’s High Time we Raise Online Safety Standards

Facebook’s statement

“We designed these technologies based on our experience of detecting abuse across billions of posts on Facebook. We hope that by contributing back to the community we’ll enable more companies to keep their services safe. It will empower non-profits that work in the space,” Facebook stated. The company added, “At Facebook, we rely on a combination of technology and people to help keep our platforms safe. We designed these technologies based on our experience detecting abuse across billions of posts on Facebook. We will continue to expand and improve our own products and features to find harmful content.”


John Clark, President and CEO of the National Center for Missing and Exploited Children (NCMEC) said, “In just one year, we witnessed a 541% increase in the number of child sexual abuse videos reported by the tech industry to the CyberTipline. We’re confident that Facebook’s generous contribution of this open-source technology will ultimately lead to the identification and rescue of more child sexual abuse victims.”

Simultaneously, Facebook is pursuing new research with educational institutions including the University of Maryland, Cornell University, and the Massachusetts Institute of Technology. One of the key projects is a new technique to detect intentional adversarial manipulations of videos and photos designed to circumvent its systems.

Read Also: Study: Facebook, Twitter Help Restore Sense Of Well-Being

Divya Tripathi is an intern with SheThepeople.TV
