A new tool developed by researchers from the USC Information Sciences Institute (USC ISI) may prove to be a major help in the ongoing war against deepfakes. The tool focuses on subtle face and head movements as well as artifacts in files to determine if a video has been faked, and can allegedly identify the computer-generated videos with up to 96 percent accuracy, according to a paper published by the Computer Vision Foundation.
Standard deepfake detection models analyze videos frame-by-frame to spot any sign of manipulation. The new technique created by the USC researchers requires far less computing power and time: it reviews an entire video at once, which allows it to process information much more quickly. It stacks the frames of the video on top of one another and looks for potential inconsistencies in how the subject of the footage moves. It could be a slight tic in how the person's eyelids move or an odd movement during a gesture -- things the researchers refer to as "soft biometric signatures." Because most deepfake algorithms don't fully model a person's movements in this way, these inconsistencies can be dead giveaways.
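The frame-stacking idea can be illustrated with a toy sketch. This is not the USC ISI model -- just a minimal example, assuming grayscale frames as NumPy arrays, of how stacking frames and scoring abrupt changes in motion can flag a transition that breaks a video's natural rhythm:

```python
import numpy as np

def motion_inconsistency_scores(frames):
    """Stack video frames and score frame-to-frame motion changes.

    Toy illustration only: large spikes in the second-order difference
    of pixel intensities hint at movement that breaks the clip's
    natural rhythm (a crude stand-in for soft biometric cues).
    """
    stack = np.stack(frames).astype(float)   # shape (T, H, W)
    velocity = np.diff(stack, axis=0)        # per-frame motion
    accel = np.diff(velocity, axis=0)        # change in motion
    return np.abs(accel).mean(axis=(1, 2))   # one score per transition

# Synthetic clip: smooth brightness drift with one injected glitch.
frames = [np.full((8, 8), t, dtype=float) for t in range(10)]
frames[6] += 50.0                            # abrupt, unnatural jump

scores = motion_inconsistency_scores(frames)
suspect = int(np.argmax(scores))
print(suspect, scores[suspect] > scores.mean())  # → 5 True
```

The glitch at frame 6 produces the largest score at transition index 5, well above the clip's average. A real detector learns far subtler signatures from training data rather than raw pixel differences, but the stack-then-compare structure is the same.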
The researchers used a data set of about 1,000 manipulated videos to train their tool, and it became quite adept at identifying deepfakes of major political figures and celebrities. That could prove to be quite valuable in the lead-up to the 2020 presidential election. Even simple video edits, like one that appeared to show Nancy Pelosi slurring her speech, have gone viral on social media. Stopping their spread before our feeds are flooded with misinformation is paramount.