Algorithms clocked error rates between 5 percent and 50 percent when comparing photos of people wearing digitally created masks with unmasked faces.
Face masks have not only been shown in research to slow the spread of COVID-19; they also deter facial-recognition technology from correctly identifying people, according to a new study.
New research from the National Institute of Standards and Technology (NIST) found that even the best of 89 commercial facial recognition algorithms tested experienced error rates between 5 percent and 50 percent when matching people in digitally applied face masks with photos of the same person without a mask.
The study shows the limitations of facial-recognition technology in a post-pandemic world and is aimed at developing and testing technology that can identify people while they are masked, said Mei Ngan, a NIST computer scientist and the report’s author, in a press statement.
She said COVID-19 has given researchers developing facial-recognition technology a new factor to consider when creating accurate facial recognition algorithms.
“We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks,” she said. “Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”
NIST researchers used what’s called “one-to-one” matching to test the algorithms, which means they compared a photo of someone with a different photo of that same person. They digitally applied masks using nine mask variants in black or light blue to mimic the surgical-style masks most people wear in public.
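At its core, one-to-one matching reduces to a distance test between two face templates. The sketch below is purely illustrative: the vectors and the threshold are made-up stand-ins for what a real feature extractor and a tuned operating point would provide.

```python
import math

# Hypothetical toy templates: in a real system these vectors would come
# from a feature extractor run on each photo (values are illustrative).
reference_photo = [0.12, 0.80, 0.45, 0.33]   # enrolled, unmasked face
probe_photo     = [0.10, 0.78, 0.47, 0.35]   # second photo, same person

def verify(template_a, template_b, threshold=0.1):
    """One-to-one matching: declare a match if the Euclidean distance
    between the two face templates falls below a tuned threshold."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(template_a, template_b)))
    return distance < threshold

print(verify(reference_photo, probe_photo))  # True: the templates are close
```

Occluding part of the face perturbs the probe template, pushing the distance toward (and past) the threshold, which is why masked comparisons fail more often.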
Results of the tests demonstrated a few key challenges for developers of facial-recognition algorithms to consider. The key overall finding is that algorithm accuracy when applied to masked faces declines considerably, researchers said.
Even the most accurate algorithms, which fail to authenticate a person only about 0.3 percent of the time with unmasked photos, saw failure rates climb to about 5 percent when masks were applied, according to the study. Moreover, many algorithms researchers consider otherwise competent failed at a much higher rate, between 20 percent and 50 percent of the time.
Masks sometimes also prevented the algorithms from performing their appointed task at all, registering what’s called a “failure to enroll or template,” researchers said. This means the technology could not effectively measure a face’s features (such as their size and distance from one another) and then compare those measurements to ones from another photo, which is how such algorithms work.
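A failure to enroll can be pictured as the template-building step refusing to produce output when required features are hidden. Everything in this sketch (the landmark names, the pairwise-distance template) is a simplified assumption, not NIST’s or any vendor’s actual pipeline:

```python
import itertools
import math

def extract_template(landmarks):
    """Hypothetical enrollment step: build a template of pairwise distances
    between facial landmarks. Returns None (a 'failure to enroll') when a
    mask hides required features and the face cannot be measured."""
    required = ("left_eye", "right_eye", "nose_tip", "mouth")
    if any(name not in landmarks for name in required):
        return None
    points = [landmarks[name] for name in required]
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

unmasked = {"left_eye": (30, 40), "right_eye": (70, 40),
            "nose_tip": (50, 60), "mouth": (50, 80)}
masked = {"left_eye": (30, 40), "right_eye": (70, 40)}  # nose, mouth occluded

print(extract_template(unmasked) is not None)  # True: template produced
print(extract_template(masked))                # None: failure to enroll
```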
Predictably, masks that cover more of a person’s face made it harder for the algorithms to work, researchers found. The more a person’s nose was covered, the lower the accuracy of the technology was.
The shape and color of a mask also were factors when it came to accuracy. Round masks produced lower error rates, while black masks gave the algorithms more difficulty in recognizing faces than blue ones, researchers said.
False negatives, in which an algorithm fails to match two photos of the same person, also increased when photo subjects wore masks, according to the study. False positives, in which an algorithm incorrectly indicates a match between photos of two different people, were less affected; rates were generally the same as in unmasked scenarios or declined slightly.
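The two error types above are computed separately, over genuine (same-person) and impostor (different-person) comparisons. The sketch below shows the bookkeeping on a tiny invented set of trial outcomes; the data are illustrative, not NIST’s:

```python
# Each trial records whether the two photos were really the same person
# and whether the algorithm declared a match (made-up example outcomes).
trials = [
    (True,  True),
    (True,  False),   # false negative: genuine pair rejected
    (True,  True),
    (False, False),
    (False, True),    # false positive: impostor pair accepted
    (False, False),
]

mated     = [said for same, said in trials if same]       # genuine pairs
non_mated = [said for same, said in trials if not same]   # impostor pairs

false_negative_rate = mated.count(False) / len(mated)
false_positive_rate = non_mated.count(True) / len(non_mated)

print(false_negative_rate)  # share of genuine pairs wrongly rejected
print(false_positive_rate)  # share of impostor pairs wrongly accepted
```

Masks mostly inflate the first number: they make two photos of the same person look less alike, while two different masked people do not look much more alike.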
The fact that facial recognition is less accurate when people are masked might be good news for some, as the technology itself has been controversial and even shown evidence of wild inaccuracy in the best of cases.
Facial recognition has come under heavy fire from privacy advocates, particularly over its use by law enforcement, which has resulted in alleged racial profiling and errors that misidentified politicians as criminals. These scenarios, which involved Amazon’s Rekognition platform, even inspired some Democratic lawmakers to propose a nationwide ban on the technology.
Ngan was quick to say that the results of the NIST study are not meant to be conclusive, but merely guidelines for developing technology specifically designed to recognize faces while people are wearing masks.
“We can draw a few broad conclusions from the results, but there are caveats,” she said. “None of these algorithms were designed to handle face masks, and the masks we used are digital creations, not the real thing.”