A new study has found major flaws in live facial recognition trials conducted by police in the United Kingdom. Researchers said the trials violated human rights, muddled public consent, and produced false matches in 81 percent of cases.

The Met police conducted ten trials between 2016 and 2019, testing the technology in public settings like sports matches, festivals, and transit hubs. The department endeavored to educate the public beforehand through an informative website, signage, and fliers handed out near the trial areas. But despite these efforts, an independent study by the University of Essex found that the trials of live facial recognition on members of the British public violated the European Convention on Human Rights (ECHR) and would likely be held unlawful in court.

The ECHR requires that any interference with people’s human rights be “necessary in a democratic society.” This is known as the necessity test. The University of Essex study found that none of the risk assessments done prior to the trials were sufficient to show that live face surveillance technology met this test, largely because the police department used an overly broad watchlist that included not only people who had been convicted of a crime, but also anyone who had ever been detained by police. According to the researchers, the Met police failed to demonstrate that a surveillance program of this magnitude passed the necessity test.

The system was also found to be wildly inaccurate. The Met’s facial recognition software, contracted from NEC in 2018, found 42 eligible “matches” to the watchlist over the six trials that researchers observed. Of these 42, only eight turned out to be the person in question, meaning that the system falsely flagged four in five people as suspects. These false matches resulted in real intimidation of innocent people. In one troubling case, the system misidentified a 14-year-old boy, and before the control room verified whether he was a match, five officers led him by the wrists to a side street for questioning. Researchers described the boy as “visibly distressed and clearly intimidated,” and said the incident was an example of how the technology encourages physical interventions.
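
That error rate is where the study’s 81 percent figure comes from: 34 of the 42 people flagged by the system were misidentified, and 34 / 42 ≈ 0.81.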

The flaws of facial surveillance are especially telling coming from the United Kingdom, one of the most heavily surveilled nations on earth. The UK has several regulatory agencies tasked with governing surveillance, including the Office of the Biometrics Commissioner, the Information Commissioner’s Office, and the Surveillance Camera Commissioner. It has numerous laws controlling the use of surveillance data, including the Protection of Freedoms Act and the more recent Data Protection Act 2018.

But existing regulatory frameworks in the United Kingdom are insufficient to govern a new and relatively understudied technology like face surveillance, according to the University of Essex study. Facial recognition in the UK is considered a form of “overt surveillance,” since it is deployed on cameras that are noticeable and visible to the public. This kind of surveillance requires informed public consent, as set forth in the Surveillance Camera Code of Practice. The researchers found that the Met police failed to meet this informed consent standard, concluding that people were “required to make an informed decision in a very short time-frame, a factor exacerbated by the limits on prior knowledge amongst the public.”

Making matters worse, the researchers could identify no specific laws authorizing the use of facial recognition, leaving police with no explicit legal basis for their trials. Face surveillance poses unprecedented threats to personal privacy and civil liberties, and the concerns surrounding the live face surveillance trials in the UK highlight the need for laws specific to face surveillance. Folding the technology into existing surveillance law is simply not enough.

Though the Met police’s use of face recognition was only a trial, it has launched an intense national debate about the public use of this technology. While the Home Office supports the police trials, Members of Parliament on the Science and Technology Committee have stepped in and called for a moratorium on live facial recognition until proper regulations are in place, emphasizing that “a legislative framework on the use of these technologies is urgently needed.”

In the United States, government agencies have taken a more dangerous approach than open trials, largely rolling out the technology in secret and without any of the privacy-specific regulatory infrastructure present in the UK. But we at the ACLU of Massachusetts are fighting back, working to ban the use of the technology in municipalities and to pass a statewide moratorium on government use.

We are at a critical moment, before the technology has come into widespread use by government agencies in Massachusetts. Every time face recognition is subjected to scrutiny, as it was in this case in the UK, we find more and more reasons to be concerned. We must press pause now, before it’s too late.

This blog post was written by Technology for Liberty Program intern Sarah Powazek.