Police in multiple major US cities have figured out a trick to circumvent their cities' bans on facial recognition technology. Just ask a friend in a city without any such restrictions to do it for you.
It's not immediately clear how widespread such side-stepping of facial recognition restrictions may be in the US, but the Washington Post's review of police documents shows it has gone on in at least two major metropolitan areas: Austin, Texas, and San Francisco.
WaPo reported over the weekend that SFPD and APD have both turned to neighboring cop shops for help, though with different levels of success. San Francisco police have reportedly requested facial recognition searches from other agencies five times since 2019 but never got a match back; Austin police, on the other hand, have sent at least 13 such requests since 2020.
"Some" of Austin's requests returned hits, and multiple suspects in the city have been arrested and charged as a result, WaPo said.
The Register hasn't heard back from either department, but spokespeople for both the SFPD and APD told the Post that the facial recognition searches were carried out without department authorization.
SFPD also noted that it hasn't faced any consequences from the city, but declined to share details about officer discipline. APD officials said the department would look into violations of city ordinances.
Clearview's facial recognition tool is widely used by law enforcement throughout the US, including in Leander, the Austin-adjacent city APD turned to for its searches. The biz explicitly prohibits sharing facial recognition data with "individuals from other government agencies." The Register asked Clearview to comment, but didn't hear back.
Surely that evidence wouldn't be admissible?
Chesa Boudin, a former San Francisco district attorney, told WaPo that many officers deny using facial recognition despite evidence to the contrary. Such behavior, he said, has made prosecutors wary of filing charges in cases where the technology may have been involved.
Boudin even said he'd seen evidence of SFPD using "be on the lookout" flyers distributed between departments as a form of plausible deniability for sharing photos that could be run through facial recognition software.
Regardless of justification, a case predicated on the use of such technology could presumably be tossed from court – especially if it became clear from records that police sought to use it in direct violation of local rules.
Facial recognition also has a well-documented bias problem – one of the major reasons campaigners have pushed for bans in the first place – which only makes its improper use a bigger concern for privacy and racial justice advocates.
Even where police have taken facial recognition systems offline over bias concerns – as happened in London, England – and later returned them to service after improvements, those improvements have been limited. The system used by the English capital's police force, the Metropolitan Police, came back with a false positive rate of around one in 6,000 – still high considering the number of Londoners who may walk in front of a camera on any given day.
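To give a sense of scale, here's a rough back-of-the-envelope sketch in Python. The false positive rate is the roughly one-in-6,000 figure reported for the Met; the daily scan volumes are purely hypothetical assumptions, chosen only to show how even a small error rate adds up once live cameras watch enough faces.

```python
# Back-of-the-envelope maths on a one-in-6,000 false positive rate.
# The daily scan volumes below are illustrative assumptions, not figures
# from the Met, WaPo, or anyone else quoted in this article.

FALSE_POSITIVE_RATE = 1 / 6_000  # roughly one wrong match per 6,000 faces scanned

for faces_scanned_per_day in (10_000, 100_000, 1_000_000):
    expected_false_matches = faces_scanned_per_day * FALSE_POSITIVE_RATE
    print(f"{faces_scanned_per_day:>9,} faces scanned/day -> "
          f"~{expected_false_matches:,.0f} people wrongly flagged per day")
```

Under those assumed volumes, the arithmetic works out to roughly two, 17, and 167 wrongly flagged people per day respectively – the point being that a rate which sounds small per scan still produces a steady stream of false matches at city scale.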
Facial recognition bans exist in around two dozen jurisdictions in the US. Around half of federal agencies with law enforcement responsibilities use some form of facial recognition technology.
It's not clear what share of US police departments use facial recognition, but a look at a map of usage and bans shows that hundreds of departments are using it – and countless more have partnerships with Amazon Ring to provide footage for law enforcement and public safety purposes. ®