A researcher has created a "two-factor authentication" system for facial recognition that uses face gestures, such as a wink or lip movements, to unlock a device. BYU professor D.J. Lee touts his new identity verification algorithm as more secure than current facial biometrics. The system, known as Concurrent Two-Factor Identity Verification (C2FIV), requires you to record a short video of an action using your face, which can also include reading a unique phrase. You then upload the clip to your device to enroll it, after which the system requires both your face and the gesture for verification.
Lee claims that bad actors can bypass biometrics such as fingerprint sensors and retina scans to hack into your phone by using masks or photos, or by simply holding your phone up to your face while you're sleeping. “The biggest problem we are trying to solve is to make sure the identity verification process is intentional,” the computer and electrical engineering professor said in a statement. “You see this a lot in the movies — think of Ethan Hunt in Mission: Impossible even using masks to replicate someone else’s face.”
Yet, while an extra layer of device security is always useful, the fact is that most modern face unlock systems can't be fooled by masks or photos. Device makers have also learned from prior lapses — like the Google Pixel 4's facial recognition flaw that allowed access even if a subject's eyes were closed — to make more watertight tools. Apple's Face ID, for instance, relies on the company's TrueDepth camera to map your face using over 30,000 invisible dots. Apple claims this info isn't found in print or 2D digital photos and can protect against spoofing by masks or other techniques.
That's not to say the new system is without its merits. It could be ideal for sensitive situations where extra security is a must, including for government and corporate devices or entry systems. Lee — who has filed a patent on the tech — also envisions use cases such as online banking, ATMs, safe deposit box access and keyless car entry. The C2FIV system relies on an integrated neural network framework to learn facial features and actions concurrently. In a preliminary study, Lee trained the algorithm on a dataset of 8,000 video clips from 50 subjects making facial actions such as blinking, dropping their jaw, smiling or raising their eyebrows.
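The core idea, requiring both a matching face and a matching facial action before granting access, can be illustrated with a minimal sketch. Note the embedding functions, threshold, and `Template` structure below are illustrative stand-ins, not Lee's actual method: C2FIV uses an integrated neural network to learn face and action representations from video, whereas this toy simply compares precomputed vectors.

```python
# Toy sketch of a concurrent two-factor check in the spirit of C2FIV:
# verification passes only when BOTH the face embedding and the
# facial-action embedding match the enrolled template. In the real
# system a neural network would produce these embeddings from video;
# here they are just hand-made vectors.
from dataclasses import dataclass

@dataclass
class Template:
    face: tuple    # enrolled face embedding (hypothetical)
    action: tuple  # enrolled facial-action embedding (hypothetical)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def verify(template, face_emb, action_emb, threshold=0.9):
    # Both factors must clear the threshold concurrently:
    # a matching face paired with the wrong gesture is rejected,
    # which is what makes the unlock intentional.
    return (cosine(template.face, face_emb) >= threshold
            and cosine(template.action, action_emb) >= threshold)

# Example: the right face with the wrong gesture fails.
enrolled = Template(face=(0.9, 0.1, 0.3), action=(0.2, 0.8, 0.5))
print(verify(enrolled, (0.9, 0.1, 0.3), (0.2, 0.8, 0.5)))  # True
print(verify(enrolled, (0.9, 0.1, 0.3), (0.8, 0.1, 0.1)))  # False
```

The "concurrent" requirement is the point of the design: even a perfect replica of the face (a mask, a photo, a sleeping owner) is useless without the secret action.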
“We could build this very tiny device with a camera on it and this device could be deployed easily at so many different locations,” Lee explained. “How great would it be to know that even if you lost your car key, no one can steal your vehicle because they don’t know your secret facial action?”