
European Parliament Adopts Landmark Artificial Intelligence Act

The act aims to create safeguards around general-purpose artificial intelligence, limit the use of biometric identification systems by law enforcement, and ban both social scoring and the untargeted scraping of facial images from CCTV footage to create facial recognition databases.

Image: Digitization of Europe (Credit: gopixa/Getty Images)

The European Parliament recently approved the Artificial Intelligence Act, which aims to ensure safety and compliance with fundamental rights while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by members of the European Parliament (MEPs) with 523 votes in favor, 46 against, and 49 abstentions.

The regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk artificial intelligence (AI), while boosting innovation and establishing Europe as a leader in the field. Obligations for AI systems are set according to their potential risks and level of impact.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Italian MEP Brando Benifei. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI’s development.”

Banned Applications
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or closed-circuit television footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities also will be forbidden.

Law Enforcement Exemptions
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. Real-time RBI may be deployed only if strict safeguards are met; for example, its use must be limited in time and geographic scope and subject to specific prior judicial or administrative authorization. Permitted uses include a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case and requires judicial authorization linked to a criminal offense.

Obligations for High-Risk Systems
Clear obligations are also foreseen for other high-risk AI systems, so designated because of their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g., health care, banking), certain systems in law enforcement, migration and border management, justice, and democratic processes (e.g., influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.
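
To give a concrete sense of what “maintain use logs” and “ensure human oversight” could look like in practice, here is a minimal, hypothetical Python sketch: a stand-in scoring model whose every decision is written to a structured audit log, with low-confidence cases escalated to a human reviewer. The function names, the confidence threshold, and the credit-scoring scenario are illustrative assumptions, not anything the Act prescribes.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("ai_audit")

    CONFIDENCE_FLOOR = 0.80  # hypothetical escalation threshold

    def score_applicant(features: dict) -> tuple[str, float]:
        """Stand-in for a high-risk model (e.g., a credit-scoring system)."""
        # Dummy logic for illustration only.
        score = min(0.99, 0.5 + 0.1 * features.get("years_employed", 0))
        return ("approve" if score >= 0.7 else "refer"), score

    def decide_with_oversight(applicant_id: str, features: dict) -> str:
        decision, confidence = score_applicant(features)
        # Use log: one structured record per automated decision.
        log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "applicant_id": applicant_id,
            "decision": decision,
            "confidence": round(confidence, 3),
        }))
        # Human oversight: low-confidence cases go to a human reviewer.
        if confidence < CONFIDENCE_FLOOR:
            return "pending_human_review"
        return decision

    print(decide_with_oversight("A-1024", {"years_employed": 2}))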

Transparency Requirements
General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.
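
The Act leaves the format of the training-content summary open. As a purely illustrative sketch, a provider might aggregate a dataset manifest into a machine-readable summary along these lines; the manifest fields, source names, and document counts below are invented for the example.

    import json
    from collections import Counter

    # Hypothetical manifest of training sources; all figures are invented.
    manifest = [
        {"source": "web-crawl",     "license": "mixed",       "documents": 1_200_000},
        {"source": "encyclopedia",  "license": "CC-BY-SA",    "documents": 650_000},
        {"source": "licensed-news", "license": "proprietary", "documents": 250_000},
    ]

    # Tally document counts per license class.
    docs_by_license = Counter()
    for entry in manifest:
        docs_by_license[entry["license"]] += entry["documents"]

    summary = {
        "total_documents": sum(e["documents"] for e in manifest),
        "documents_by_license": dict(docs_by_license),
        "sources": sorted(e["source"] for e in manifest),
    }
    print(json.dumps(summary, indent=2))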

Additionally, artificial or manipulated images, audio, or video content need to be clearly labeled as such.
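
The Act does not prescribe a particular labeling mechanism (industry efforts such as C2PA content credentials are one candidate). As a minimal illustration, assuming the Pillow imaging library, a generator could attach a machine-readable provenance tag to a PNG’s metadata:

    from PIL import Image                    # pip install Pillow
    from PIL.PngImagePlugin import PngInfo

    # A blank placeholder image stands in for model output here.
    img = Image.new("RGB", (512, 512), "white")

    # Attach machine-readable provenance labels as PNG text chunks.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical model name
    img.save("labeled_output.png", pnginfo=meta)

    # Downstream software can read the label back:
    with Image.open("labeled_output.png") as reloaded:
        print(reloaded.text.get("ai_generated"))  # -> "true"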

Measures To Support Innovation
Regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to small and medium-sized enterprises (SMEs) and start-ups, so they can develop and train innovative AI before placing it on the market.

Next Steps
The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force 20 days after its publication in the Official Journal and become fully applicable 24 months after entry into force, with several exceptions: bans on prohibited practices apply after 6 months; codes of practice after 9 months; GPAI rules, including governance, after 12 months; and obligations for high-risk systems after 36 months.
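
To make the staggered schedule concrete, here is a small Python sketch that derives each milestone from an assumed publication date. The date used is a placeholder; the actual Official Journal publication date was not yet fixed when this was written.

    from datetime import date, timedelta

    def add_months(d: date, months: int) -> date:
        # Shift a date forward by whole calendar months (no day clamping
        # needed here, since the starting day of month is <= 28).
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    # Hypothetical publication date in the Official Journal.
    published = date(2024, 7, 1)
    entry_into_force = published + timedelta(days=20)

    milestones = {
        "Bans on prohibited practices": add_months(entry_into_force, 6),
        "Codes of practice": add_months(entry_into_force, 9),
        "GPAI rules incl. governance": add_months(entry_into_force, 12),
        "Full applicability": add_months(entry_into_force, 24),
        "High-risk system obligations": add_months(entry_into_force, 36),
    }

    for name, day in sorted(milestones.items(), key=lambda kv: kv[1]):
        print(f"{day.isoformat()}  {name}")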