Key Contacts: Sean McElligott – Partner

Euractiv reports that yesterday, following months of intense negotiations, members of the European Parliament bridged their differences and reached a provisional political deal on the world’s first Artificial Intelligence rulebook.

The text may still be subject to minor adjustments at the technical level ahead of a key committee vote scheduled for 11 May, but it is expected to go to a plenary vote in mid-June.

Until the last moment, EU lawmakers were seeking compromise on some of the most controversial parts of the proposal, set out below.

General Purpose AI

How to deal with AI systems that do not have a specific purpose has been a contentious topic in the discussions. The only significant last-minute change was on generative AI models, which would have to be designed and developed in accordance with EU law and fundamental rights, including freedom of expression.

Prohibited practices

Another politically sensitive topic was which types of AI applications are to be banned because they are considered to pose an unacceptable risk. The idea was previously floated to prohibit AI-powered tools for all general monitoring of interpersonal communications, but the proposal was dropped. There was, however, an extension of the ban on biometric identification software. Initially banned only for real-time use, this recognition software could be used ex-post only for serious crimes and with prior judicial approval.

The use of emotion recognition AI-powered software is banned in the areas of law enforcement, border management, the workplace, and education. The EU lawmakers’ ban on predictive policing was extended from criminal offences to administrative ones.

High-risk classification

The initial proposal automatically classified AI solutions falling under the critical areas and use cases listed in Annex III as high-risk, meaning providers would have to comply with a stricter regime including requirements on risk management, transparency and data governance. MEPs introduced an extra layer, meaning that an AI model falling under Annex III’s categories would only be deemed high-risk if it posed a significant risk of harm to health, safety or fundamental rights.

AI used to manage critical infrastructure like energy grids or water management systems would also be categorised as high-risk if it entails a severe environmental risk. In addition, the recommender systems of very large online platforms, as defined under the Digital Services Act (DSA), will be deemed high-risk.

Detecting biases

Extra safeguards have been included for the process whereby providers of high-risk AI models can process sensitive data, such as sexual orientation or religious beliefs, to detect negative biases. In particular, for the processing of such special categories of data to take place, the bias must not be detectable by processing synthetic, anonymised, pseudonymised or encrypted data. The assessment must happen in a controlled environment, and the sensitive data cannot be transmitted to other parties; it must be deleted following the bias assessment. Providers must also document why the data processing took place.

Sustainability of high-risk AI

High-risk AI systems will have to keep records of their environmental footprint, and foundation models will have to comply with European environmental standards.