Key Contact: Sean McElligott – Partner

It now looks like it will be July/August before publication of the European Union's Artificial Intelligence (AI) Act. This means that the artificial intelligence practices prohibited under Article 5, deemed to pose the greatest threat to safety, will be expressly banned throughout the EU within six months of publication, i.e. by January/February 2025. Businesses face a quick turnaround to ensure their systems adhere to the requirements of the Act, or risk stiff penalties for non-compliance.

The language used in Article 5 is arguably open to interpretation in places, and the breadth of the prohibitions therefore creates a risk that businesses which do not consider their AI tools to be prohibited may nonetheless fall foul of Article 5. The publication of the much-anticipated Article 5 guidelines from the AI Office will be an essential tool in helping businesses navigate these prohibitions.

Many of the prohibited practices have no intent requirement, meaning that entities whose AI tools could be said to have a "manipulative" effect on behaviour run the risk of falling foul of Article 5, regardless of their aim.

Eight AI practices will be expressly prohibited, most likely from January/February 2025:

  1. Article 5(1)(a) prohibits the use of an AI system that deploys "subliminal techniques" that materially distort a person's behaviour, causing them to take a decision they would not otherwise have taken in a manner likely to cause significant harm. It is interesting to note that the phrase "significant harm" is used four times throughout the AI Act yet is undefined. While the term has been used in legislation in other areas, no guidance is yet available in respect of its use in the AI context. Additionally, the AI Act does not define what constitutes a "subliminal technique". A broad reading of the term could in theory implicate many AI tools used in marketing or personalisation, such as the automated strategic placement of advertisements on a web page.
  2. Article 5(1)(b) prohibits AI systems that exploit the vulnerabilities of a person due to age, disability, or a specific social or economic situation, with the objective or effect of materially distorting behaviour in a manner that causes or is reasonably likely to cause significant harm. One possibly unexpected application: companies that use AI tools to generate customised mortgage rates for potential borrowers may find their products restricted if such tools base their output on one of these characteristics.
  3. Article 5(1)(c) prohibits AI systems that use social scoring techniques where the resulting score leads to detrimental treatment in a context unrelated to that in which the data was originally generated or collected, or to detrimental treatment that is unjustified or disproportionate to the behaviour in question. Without clear definitions, such a provision may create difficulties for AI tools that scan the social media profiles of candidates in the hiring process, or potentially for tools used by insurers to track drivers and adjust their rates based on performance.
  4. Article 5(1)(d) bans AI systems used for making risk assessments of persons to assess or predict their likelihood of committing a criminal offence, based solely on profiling or on assessing their personality traits and characteristics. The Act provides a narrow carveout for systems used to support the human assessment of the involvement of a person in criminal activity. This subsection could, for example, impact security systems used at stadiums and other large events that rely on AI to make predictions and identify potential threats regarding those entering and leaving the venue.
  5. Article 5(1)(e) prohibits AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This could arguably result in facial recognition tools commonly used for banking, social media, and other mobile applications having to adjust the data used to train their systems to ensure compliance.
  6. Article 5(1)(f) bans the use of AI systems that infer emotions in the workplace and education institutions, except where the use of the system is intended for medical or safety reasons. This prohibition might impact popular AI tools which have become commonplace as initial screeners during the hiring process.
  7. Article 5(1)(g) prohibits biometric categorisation systems that categorise persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. However, the labelling or filtering of lawfully acquired biometric datasets, and the categorisation of biometric data for law enforcement purposes, remain permitted. AI systems used to compile ancestry information for users, or those that assist voters in determining which candidate best aligns with their values, could run afoul of this provision.
  8. Finally, Article 5(1)(h) bans the use of ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for:
  • The targeted search for specific victims of abduction, trafficking or sexual exploitation, or the search for missing persons.
  • The prevention of a specific, substantial and imminent threat to life or physical safety or a threat of a terrorist attack.
  • The localisation or identification of a person suspected of having committed a serious criminal offence.

It is conceivable that tools used by local law enforcement, particularly in areas near major tourist destinations, may need to limit their collection of data except in the situations listed above. Additionally, tools used by transportation systems, such as those used to catch fare evaders, might conceivably also be caught by Article 5.

The penalties applicable for breaches of Article 5 are even more eye-watering than the well-known GDPR penalties. Non-compliance with Article 5 can result in administrative fines of up to €35,000,000 or 7% of the undertaking's total worldwide annual turnover for the preceding financial year, whichever is higher. Supplying incorrect or misleading information to the relevant authorities exposes an undertaking to fines of up to €7,500,000 or 1% of its worldwide annual turnover, whichever is higher. For small and medium enterprises and start-ups, the calculation is adjusted: the lower of the two amounts for each category of violation applies.
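For illustration only, the short Python sketch below shows how the two fine caps interact under a simplified reading of the Act; the turnover figures are hypothetical examples, and this is not legal advice:

    # Illustrative sketch only: how the Article 5 fine caps combine.
    # The turnover figures used below are hypothetical examples.

    def max_fine_article_5(turnover_eur: float, is_sme: bool = False) -> float:
        """Upper limit of the administrative fine for an Article 5 breach."""
        fixed_cap = 35_000_000               # €35,000,000 fixed cap
        turnover_cap = 0.07 * turnover_eur   # 7% of worldwide annual turnover
        # Undertakings generally face the higher of the two caps;
        # for SMEs and start-ups, the lower of the two applies.
        return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

    print(max_fine_article_5(1_000_000_000))             # 70000000.0 for a €1bn-turnover undertaking
    print(max_fine_article_5(100_000_000, is_sme=True))  # 7000000.0 for a €100m-turnover SME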

While at first glance the Article 5 prohibitions appear to apply to a very limited subset of possible AI activities, the vagueness of the language used gives rise to concerns that other activities, which might not typically be considered serious enough to be prohibited, could be caught unintentionally by the Act. The sooner we get sight of the promised guidelines from the AI Office, the better.