It’s also often wrong
Even for Caucasian men, the technology often fails to work. In South Wales, police testing a facial recognition system saw 91% of matches labelled as false positives: the system made 2,451 incorrect identifications and only 234 correct ones when matching faces to names on the watchlist. Similarly, in New York City the transport authority halted a pilot that had a 100% error rate. Facial recognition tools work far better in the lab than they do in the real world, where small factors such as light, shade and how the image is captured can affect the result.
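To see where that 91% figure comes from: the false positive rate here is simply the share of the system’s matches that turned out to be wrong. A minimal sketch in Python, using the published figures from the South Wales trial (the variable names are illustrative, not from any police system):

```python
# Share of flagged matches that were wrong, using the South Wales trial figures.
false_positives = 2451  # incorrect identifications flagged by the system
true_positives = 234    # correct matches against the watchlist

total_matches = false_positives + true_positives
false_positive_share = false_positives / total_matches

print(f"{false_positive_share:.1%} of matches were false positives")
# Prints: 91.3% of matches were false positives
```

Rounded down, that is the 91% reported above.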
There’s the potential for fraud
Companies selling facial recognition software have compiled huge databases to power their algorithms: the controversial Clearview AI, for example, can search against 3 billion images scraped from Google, Facebook, YouTube, LinkedIn and Venmo. These systems are a real security risk. Hackers have already broken into databases of facial scans used by banks, police departments and defence firms, and criminals can use that information to commit identity fraud or to harass and stalk victims. Biometric data is not something you’d want falling into the wrong hands, as US Senator Edward Markey warned after Clearview suffered a data breach in 2020: “If your password gets breached, you can change your password. If your credit card number gets breached, you can cancel your card. But you can’t change biometric information like your facial characteristics.”
It’s being used to monitor children
Using our faces to unlock our iPhones or computers may seem harmless, but the technology is increasingly being used to capture images of children too. In China, the gaming giant Tencent is using facial recognition to stop children playing games between 10pm and 8am: during those hours, players must pass a facial scan to prove they are adults. In the US, one Texas school district ran a pilot using surveillance technology in its school corridors; over seven days it logged 164,000 facial detections, including one student who was detected 1,100 times. And in Argentina, police forces are using it to track alleged offenders as young as four.
It’s insufficiently regulated
The use of facial recognition tools is already governed by the GDPR in the EU and the UK, but technology companies themselves are calling for stronger regulation. IBM, Microsoft and Amazon have all either pulled out of the facial recognition software market altogether or are limiting their work with police forces in the US. In 2021, the European Union’s lead data protection supervisor called for remote biometric surveillance in public places to be banned, and for a ban on AI being used to predict people’s ethnicity, gender, or political or sexual orientation.
It’s not just monitoring your face
How you look, how you think, how you feel: firms are developing new and alarming ways to track everything we do. The Irish startup Liopa, for example, is trialling a phone application that can interpret phrases mouthed by people, and Amsterdam-based VisionLabs claims it can tell whether someone is showing anger, disgust, fear, happiness, surprise or sadness. Such technology is increasingly being used by companies to track productivity and even to make hiring decisions. And other biometric technologies can track how you move, identify you from the shape of your ear, or match your iris.
Your private life is no longer private
Would you act differently if you knew you were being watched? What if the watcher could find out who you are? Freedom of speech campaigners say the use of facial recognition software has real implications for fundamental human rights, including the right to protest and the right to a private life. More than a third (38%) of 16-24-year-olds polled in London, and 28% of people from ethnic minorities, said they would stay away from events monitored with live facial recognition. In 2020, a court also found in favour of the British man Ed Bridges, who argued that the use of automatic facial recognition technology had caused him distress. Bridges’s face was among 500,000 captured by South Wales police in Cardiff, the majority of them belonging to people not suspected of any wrongdoing.