The European Union (EU) is considering a ban on artificial intelligence (AI) used for mass surveillance and social credit scoring, citing the privacy risks involved, according to leaked documents. The bloc's reluctance to give AI free rein in these contexts could result in outright prohibitions on certain uses of the technology.
The European Union’s artificial intelligence guidelines are expected to provide a number of data privacy protections and to ensure that companies and government bodies comply with the General Data Protection Regulation (GDPR).
According to the leaked documents, first published by Politico, the EU is hoping to set itself apart from countries such as the U.S., where artificial intelligence is widely used for mass surveillance, and China, where it is also deployed for social credit ratings.
The proposals are said to include a requirement that EU member states establish risk assessment boards to tackle issues related to artificial intelligence. “Companies that develop or sell prohibited AI software in the EU – including those based elsewhere in the world – could be fined up to 4% of their global revenue,” according to reports.
The proposals outlined in the leaked documents are expected to be formally put forward to the European Parliament on April 21st.
A report from The Verge says the leaked documents include regulations in the following areas:
- A ban on AI for “indiscriminate surveillance,” including systems that directly track individuals in physical environments or aggregate data from other sources
- A ban on AI systems that create social credit scores, which means judging someone’s trustworthiness based on social behavior or predicted personality traits
- Special authorization for using “remote biometric identification systems” like facial recognition in public spaces
- Notifications required when people are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”
- New oversight for “high-risk” AI systems, including those that pose a direct threat to safety, like self-driving cars, and those that have a high chance of affecting someone’s livelihood, like those used for job hiring, judiciary decisions, and credit scoring
- Assessment for high-risk systems before they’re put into service, including making sure these systems are explicable to human overseers and that they’re trained on “high quality” datasets tested for bias
- The creation of a “European Artificial Intelligence Board,” consisting of representatives from every nation-state, to help the commission decide which AI systems count as “high-risk” and to recommend changes to prohibitions
Daniel Leufer, a European policy analyst, told The Verge that “the descriptions of AI systems to be prohibited are vague, and full of language that is unclear and would create serious room for loopholes.”
Criticism also came from Omer Tene, vice president of the International Association of Privacy Professionals, who said that the regulation “represents the typical Brussels approach to new tech and innovation: when in doubt, regulate.”
Michael Veale, a lecturer in digital rights and regulation at University College London, said the regulation would be unlikely to restrain big tech companies. “In its sights are primarily the lesser known vendors of business and decision tools, whose work often slips without scrutiny by either regulators or their own clients,” he said.
“Few tears will be lost over laws ensuring that the few AI companies that sell safety-critical systems or systems for hiring, firing, education and policing do so to high standards. Perhaps more interestingly, this regime would regulate buyers of these tools, for example, to ensure there is sufficiently authoritative human oversight.”