Europe makes the case to ban biometric surveillance

Companies are racing to track your emotions, your gait and your voiceprint. Should Europe ban biometric tracking entirely?

Your body is a data goldmine. From the way you look to how you think and feel, firms working in the burgeoning biometrics industry are developing new and alarming ways to track everything you do. And, in many cases, you may not even know you’re being tracked.

But the biometrics business is on a collision course with Europe’s leading data protection experts. Both the European Data Protection Supervisor, the EU’s independent data protection authority, and the European Data Protection Board, which helps countries implement GDPR consistently, have called for a total ban on using AI to automatically recognise people.

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” the heads of the two bodies, Andrea Jelinek and Wojciech Wiewiórowski, wrote in a joint statement at the end of June. AI shouldn’t be used in public spaces for facial recognition, gait recognition, fingerprints, DNA, voice, keystrokes and other types of biometrics, they said. There should also be a ban on trying to predict people’s ethnicity, gender, political or sexual orientation with AI.

But such calls fly in the face of the EU’s proposed regulations for AI. The rules, which were unveiled in April, class “remote biometric identification” as high-risk – meaning it is allowed but subject to stricter controls than other uses of AI. Politicians across the EU will spend years debating the AI rules, and biometric surveillance has already become one of the most contentious issues. When passed, the regulations will define how hundreds of millions of people are surveilled for decades to come. And the debate starts now.

Facial recognition has been controversial for years, but the real biometrics boom is taking aim at other parts of your body. Across the EU’s 27 member states, a number of companies have been developing and deploying biometric technologies that, in some cases, aim to predict people’s gender and ethnicity and to recognise their emotions. In many cases the technology is already being used in the real world. However, using AI to make these classifications can be scientifically and ethically dubious. Such technologies risk invading people’s privacy or automatically discriminating against them.

Take Herta Security and VisionLabs, for example. Both firms develop facial recognition technology for a variety of uses and say it could be deployed by law enforcement and by the retail and transport industries. Documents from Herta Security, which is based in Barcelona, claim its clients include police forces in Germany, Spain, Uruguay and Colombia, as well as sports stadiums, shopping centres, hotel chains such as Marriott and Holiday Inn, airports, and casinos.

Critics point out that both Herta Security and VisionLabs claim parts of their systems can be used to track sensitive attributes. “A lot of the systems, even the ones that are being used to identify people, are relying on these potentially very harmful classifications and categorisations as the underlying logic,” says Ella Jakubowska, a policy advisor looking at biometrics at advocacy group European Digital Rights. The group is campaigning for a ban on biometric surveillance across Europe.

BioMarketing, Herta Security’s face analysis tool, is billed as a way for shops and advertisers to learn about their customers. It can “extract” everything from a person’s age and gender to whether they wear glasses, and can even track their facial expressions. Herta Security says the technology is “ideal” for developing targeted advertising or helping companies understand who their customers are. The tool, Herta Security claims, can also classify people by “ethnicity”. Under GDPR, personal data that reveals “racial or ethnic origin” is considered sensitive, with strict controls in place around how it can be used.

Jakubowska says she challenged Herta Security’s CEO on the use of ethnicity last year and that since then the company has removed the claim from its marketing material. It remains unclear whether the feature has been removed from the tool itself. Company documents hosted by third parties still list ethnicity as one of the characteristics that can be found using BioMarketing. Company documents from 2013 referred to the tool detecting “race”, before the company updated the wording to “ethnicity”. Herta Security, which has received more than €500,000 in EU funding and has been awarded an EU seal of excellence, did not respond to requests for comment.

VisionLabs, which is based in Amsterdam, says its “ethnicity estimator” aims to “determine a person’s ethnic group and/or race”. Its website claims it is able to “distinguish” people who are Indian, Asian, Black or white. But its analytics go deeper. It also says its “smile estimator” can predict “mouth occlusion”, and the technology is able to tell if a person is showing anger, disgust, fear, happiness, surprise, sadness or a neutral expression. Gender, age, whether people are paying attention to items, and dwell time are all listed as other metrics that can help retailers understand who is in their shops.
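Neither firm publishes how its estimators work. But commercial attribute classifiers of this kind are commonly structured as a face “embedding” that feeds a small classification head per attribute, with a softmax turning scores into one probability per label. The Python sketch below is purely illustrative – the embedding size, labels and random weights are invented stand-ins, not either company’s code – and shows only that generic pattern:

```python
import numpy as np

# Hypothetical label set, mirroring the expressions described above.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "surprise", "sadness", "neutral"]

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def emotion_scores(embedding, weights, bias):
    """Linear classification head over a face embedding: one probability per label."""
    return dict(zip(EMOTIONS, softmax(weights @ embedding + bias)))

# Toy usage with random stand-ins; a real system would use a trained network.
rng = np.random.default_rng(0)
embedding = rng.normal(size=128)                  # stand-in for a face embedding
weights = rng.normal(size=(len(EMOTIONS), 128))   # untrained, illustrative weights
bias = np.zeros(len(EMOTIONS))
print(emotion_scores(embedding, weights, bias))
```

The structure is the point of contention: whatever the head is trained on – emotions, gender, “ethnicity” – the pipeline will always output a label, whether or not the underlying category is scientifically meaningful.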

AI experts and ethicists have warned against using biometrics to predict people’s gender or emotions. Studies have disputed whether AI can detect emotions at all, and comparisons have been made to inaccurate and widely debunked polygraph tests. And, earlier this year, 175 civil liberties groups and activists signed an open letter calling for a ban on biometric surveillance.

Jakubowska says the use of such technologies is likely to be incompatible with the EU’s stance on human rights. “The very idea that we would have machines that are trying to guess our gender, and then make decisions that will impact our life, is really, really worrying,” she says. “It's only the tip of the iceberg in all the different ways that our bodies and behaviours are being degraded into data points and shoved into these faraway biometric databases that we often have no idea about.”

“The adoption and subsequent standards surrounding facial recognition technology is still in its infancy,” says a VisionLabs spokesperson. They add that it “encourage[s]” the debate around protecting people’s safety and that its use cases are not prohibited by GDPR.

But biometrics is big business – and its applications reach far beyond marketing. The technology stretches from identity verification, such as the iPhone’s Face ID and fingerprint scanners, to experimental systems that try to work out if you’re lying based on the movement of your facial muscles. It can also draw on what you look like, the pattern of your veins, how you move, your DNA, your iris, the shape of your ear, your finger geometry and the shape of your hand. In short, biometrics can measure and quantify what makes you you.

On the one hand, using this technology can help make our lives more convenient and potentially reduce fraud. On the other, it can be seriously creepy and discriminatory. Bank cards are getting fingerprint scanners, airports are using facial recognition and biometrics to identify people, police in Greece are deploying live facial recognition, and in the UK police are reportedly experimenting with AI that can detect if people are distressed or angry.

By the mid-2020s the global biometrics industry is estimated to be worth between $68.6bn and $82.8bn – up from between $24bn and $36bn today. While China and the US lead the world in the creation of biometric technology, Europe’s market is growing fast. In the last decade, three EU research programmes have given more than €120 million to 196 groups for biometrics research. Major defence and security companies are developing biometric technologies, as are small startups.

Under the European Commission’s proposed AI regulations, all biometric identification systems are considered high-risk. But it remains unclear whether this approach will keep people safe. The plans state that the creators of high-risk technologies will have to jump through a series of hoops before their technology can be used in the real world. These include using high-quality data sets and telling regulators and ordinary people how the systems work. They will also need to complete risk assessments to make sure their systems have a “high level of robustness, security and accuracy”.
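What those hoops would look like in practice is still undefined. As a purely hypothetical sketch – every field name below is invented, not drawn from the regulation’s text – a provider of a high-risk system might end up keeping a structured record of the kinds of disclosures the proposal describes:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    """Illustrative only: a stand-in for the documentation a provider might keep."""
    system_name: str
    intended_purpose: str                  # disclosed to regulators and users
    training_data_sources: list[str]       # evidence of high-quality data sets
    risk_assessment_completed: bool        # robustness, security and accuracy checks
    accuracy_metrics: dict[str, float] = field(default_factory=dict)

record = HighRiskSystemRecord(
    system_name="example-face-matcher",            # hypothetical system
    intended_purpose="access control at a private site",
    training_data_sources=["internal consented dataset v2"],
    risk_assessment_completed=True,
    accuracy_metrics={"false_match_rate": 0.001},
)
print(record)
```

Whether self-documented records of this kind would amount to meaningful oversight is exactly what critics dispute.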

The European Association for Biometrics, a non-profit group that works with governments, NGOs, the biometrics industry and others, says it “supports the respect of fundamental rights in the European Union” when it comes to developing new technologies. “If particular biometric applications, such as their use in public places, are not only endangering such rights, but make exercising fundamental rights and freedoms impossible, such as the right to move freely in public places without giving up the right to anonymity, such technology should not be used,” the organisation says. It adds there needs to be “precise guidelines and regulation” about what the technologies can and can’t be used for.

But while regulators debate these laws – including whether to ban biometrics entirely – the technology is creeping further into our day-to-day lives. Critics say the growth of the technology lacks transparency and that little is known about who is using it, when and how.

“We have very little information about who the companies are and the terms on which they’re collecting, processing, retaining, sharing or securing our most personal information,” says Alexandra Pardal, the executive director at Democratic Integrity, an investigative organisation that’s looking at the use of biometrics in Europe. “What we do know is that police forces, public authorities, private companies and individuals are collecting and using people’s biometrics.”

This lack of transparency appears to apply to the EU’s own funding of biometrics. Between September 2016 and August 2019, the EU’s research and innovation programme, Horizon 2020, backed iBorderCtrl, a project that aimed to use people’s biometrics to help with identification and to analyse people’s facial “micro-expressions” to work out whether they were lying. Thirteen companies and research groups were involved in the project, which, as its name suggests, aimed to develop the technology for use at the EU’s borders.

Although iBorderCtrl suggests parts of its work resulted in “successful candidates for future systems”, reports have claimed that the AI was unable to work out whether people were lying. But much of the research remains secret. Patrick Breyer, a Pirate Party Germany politician and MEP, is suing the European Commission for unpublished documents on the ethics and legal considerations of the project. A court decision is expected in the coming months.

Breyer, who is opposed to biometric surveillance, says the EU should not be funding research that may contradict its own stances on data protection and discrimination. “It’s really scandalous that the EU helps develop technologies that will harm people,” he says, pointing to emotion detection systems being tested on Uyghurs in Xinjiang, China.

By the time legislation is in place, Breyer worries that the technology may have become commonplace. “Once they have been developed, there's nothing to stop the companies from selling them to the private market, or even outside the EU, to authoritarian governments, to dictatorships like China,” he adds. “All these technologies will have the potential to create a ubiquitous system of surveillance for following us wherever we go, whatever we do, even for reporting us for our behaviour, which may be different from that of others.”

Updated 08.07.21, 11:10 GMT: The European Association for Biometrics’ remit has been clarified

This article was originally published by WIRED UK