Key Contacts: Eoghan Doyle – Partner  |  Hugo Grattirola – Senior Associate

A. Introduction

On 21 April 2021, the European Commission set out its long-awaited and ambitious proposal for the regulation of artificial intelligence (“AI”) systems (the “AI Regulation”). The aim of the AI Regulation is to set a wide-reaching standard for the harmonisation of the ethical use of AI in all its forms, whilst strengthening the uptake of, investment in, industry capacity for and innovation in AI in the EU. In an ever-changing digital decade, this approach aims to establish Europe as the central hub of trustworthy and horizontally regulated AI in the global market, boosting competition and fostering AI’s potential for excellence ‘from the lab to the market’. The AI Regulation signals a significant evolution in international algorithmic governance for 2022, with many major jurisdictions now expressing a desire to follow suit. The Commission explicitly emphasises the central needs of the citizen throughout the proposal, guaranteeing the fundamental rights of both natural and legal persons, security and the protection of general interests.

B. Background

The first in-depth analysis of the policy and regulatory options for AI was set out in a White Paper on AI published by the European Commission in February 2020. Following the adoption of three legislative resolutions in October 2020, the Commission produced what is known as the “AI package” in April 2021, containing the principal proposal for the AI Regulation, together with a Communication on Fostering a European Approach to Artificial Intelligence and a Coordinated Plan on AI, as well as a proposal for a new Regulation on machinery products (to replace the existing Machinery Directive and address the use of AI in machinery). The AI Regulation proposal is currently at the ordinary legislative procedure stage, the feedback period having closed on 6 August 2021. The EU Council has since published its latest amendments to the draft AI Regulation on 29 November 2021.

C. Scope

The AI Regulation proposes a very broad regulatory scope both materially and territorially:

  1. How is AI defined?

The AI Regulation (in its latest version as amended by the EU Council) defines an “AI system” as a system that:

  • receives machine and/or human-based data and inputs;
  • infers how to achieve a given set of human-defined objectives through learning, reasoning, or modelling, provided that the system was developed using one or more of the following techniques and approaches:
    1. machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
    2. logic/knowledge-based approaches, including knowledge representation, inductive programming, knowledge bases, inference, and deductive engines, reasoning, and expert systems; and
    3. statistical approaches, Bayesian estimation, search, and optimization methods; and
  • generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with.

This definition distinguishes AI from classic IT and was purposively drafted to be as technologically neutral and future-proof as possible, an approach similar to that taken by the Commission with the GDPR.

  2. Who does the AI Regulation apply to?

The AI Regulation will have extraterritorial reach in that it will apply to:

  • providers developing and placing on the market or putting into service AI systems in the European Union, irrespective of whether those providers are physically present or established within the European Union or in a third country;
  • users of AI systems who are physically present or established within the European Union;
  • providers and users of AI systems that are physically present or established in a third country, where the output produced by the system is used in the European Union;
  • importers and distributors of AI systems;
  • product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; and
  • authorised representatives of providers, where those representatives are established in the Union.

Interestingly, the AI Regulation states it does not apply to:

  • AI systems developed or used exclusively for military or national security purposes;
  • public authorities in a third country, or international organisations falling within the scope of the AI Regulation, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or with one or more Member States;
  • AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development; and
  • any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.

D. Risk levels and requirements for providers of AI systems

The Commission aims to address the risks generated by specific uses of AI through a set of proportionate and flexible rules encompassing all aspects of the lifecycle of the development, sale, and use of AI, without preventing scientific research in the area. Like the GDPR, the AI Regulation follows a risk-based approach: the higher the potential dangers in an area of application, the higher the regulatory requirements for the AI system. The proposal essentially distinguishes between four groups:

  1. Prohibited/Unacceptable-Risk AI Systems – the “Blacklist”

The AI Regulation expressly prohibits the following AI practices as presenting an unacceptable risk to individuals (whether they consist in placing on the market, putting into service, or using an AI system):

  • Subliminal, manipulative, or exploitative AI systems that cause physical or psychological harm.
  • Real-time, remote biometric identification AI systems used in public spaces for law enforcement.
  • All forms of social scoring, such as AI or technology that evaluates an individual based on social behaviour or predicted personality traits.
  2. High Risk AI Systems

The AI Regulation places a clear emphasis on high-risk AI systems, which are subject to extensive technical, monitoring and compliance obligations by virtue of the potential social or economic threat such systems might pose, including to the environment and critical infrastructure. The AI Regulation provides a list of what are considered high-risk AI systems, which, for example, includes systems that:

  • evaluate consumer creditworthiness;
  • monitor and manage employees/students;
  • are used for law enforcement purposes and/or the administration of justice;
  • use biometric identification without the individual’s consent or judicial approval; or
  • determine access to and enjoyment of private/public services.

The AI Regulation expressly provides that the EU would review and update the list of AI systems considered high risk every two years.

In terms of requirements, high-risk systems will give rise to obligations in a number of areas including, among others, transparency, human oversight, risk management (including cybersecurity), data quality, monitoring and reporting, and record keeping. These obligations will apply to those building, selling, or using high-risk AI systems. Of particular importance is the obligation to conduct “conformity assessments”, i.e. impact assessments analysing data sets, potential biases, user interaction and the overall design and monitoring of system outputs, prior to placing the AI system on the market or putting it into service.

  3. Limited Risk AI Systems

AI systems identified as presenting limited risks will be subject to specific transparency requirements. An example is the use of AI to operate a chatbot. In such a scenario, users should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.

  4. Minimal Risk AI Systems (the majority of AI systems used in the EU)

The majority of general-purpose AI systems currently in use in the European Union are expected to fall within this category. Under the AI Regulation, these AI systems can be developed and used subject to existing legislation, without further legal obligations.

E. Enforcement/Fines

In terms of enforcement in the event of non-compliance, the AI Regulation puts the onus on Member States to lay down the rules on penalties which shall be effective, proportionate, and dissuasive. The AI Regulation provides for the following levels of fine:

  • for infringements on prohibited practices or non-compliance related to requirements on data governance, up to €30M or 6% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher;
  • for non-compliance with any of the other requirements or obligations of the AI Regulation, up to €20M or 4% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher; and
  • for the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request, up to €10M or 2% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher.
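The “whichever is higher” mechanic means the applicable ceiling depends on the offender’s turnover. A minimal illustrative sketch (the function name and turnover figures below are our own, not from the Regulation):

```python
def fine_ceiling(fixed_cap_eur: int, turnover_pct: float,
                 worldwide_annual_turnover_eur: int) -> float:
    """Maximum fine for a tier: the fixed cap or the percentage of
    worldwide annual turnover, whichever is higher (illustrative only)."""
    return max(fixed_cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# Prohibited-practice tier (EUR 30M / 6%) for a firm with EUR 1bn turnover:
# 6% of EUR 1bn is EUR 60M, which exceeds the EUR 30M fixed cap.
print(fine_ceiling(30_000_000, 0.06, 1_000_000_000))  # 60000000.0

# Same tier for a firm with EUR 100M turnover: 6% is only EUR 6M,
# so the EUR 30M fixed cap applies instead.
print(fine_ceiling(30_000_000, 0.06, 100_000_000))  # 30000000
```

The same function covers the €20M/4% and €10M/2% tiers by swapping the parameters.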

The AI Regulation also provides for the establishment of a European Artificial Intelligence Board (“EAIB”) to advise and assist the Commission and to facilitate effective cooperation among, and consistent application of the AI Regulation by, the national supervisory authorities.

F. When will it come into force?

The AI Regulation remains to be voted on and approved by the European Parliament and by the representatives of the Member States in the Council of the European Union. It is envisaged that the AI Regulation will apply two years following its entry into force. Accordingly, the regulation could be adopted in the second half of 2022, followed by a transitional period, and is likely to become applicable to operators in the second half of 2024.

G. How to prepare for its entry into force?

As a preliminary step, organisations and providers will want to establish a robust risk management life cycle focusing on:

  • identifying/inventorying the AI systems relied on by the organisation and the risks such systems represent (low, medium, high), together with what measures are in place to mitigate such risks;
  • implementing conformity assessments to review whether the AI systems meet applicable regulations and other standards; and
  • establishing some form of governance structure for controlling, monitoring, and reporting on relevant developments in standards and compliance requirements; such a structure could include a team of professionals from a variety of functions, including cybersecurity, legal, and technology.
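The inventory step above could be captured in a simple register mapping each AI system to one of the Regulation’s four risk tiers and its mitigating measures. A sketch in Python, with hypothetical system names and entries (the data structure is ours, not prescribed by the Regulation):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    mitigations: list = field(default_factory=list)

# Hypothetical entries for a first-pass inventory.
inventory = [
    AISystemRecord("cv-screening", "rank job applicants", RiskTier.HIGH,
                   ["human review of rankings", "bias testing of training data"]),
    AISystemRecord("support-chatbot", "answer customer queries", RiskTier.LIMITED,
                   ["disclose that users are interacting with a machine"]),
]

# High-risk systems should be queued for a conformity assessment
# before being placed on the market or put into service.
needs_assessment = [r.name for r in inventory if r.tier is RiskTier.HIGH]
print(needs_assessment)  # ['cv-screening']
```

Keeping mitigations alongside each record gives the audit trail that regulators are likely to expect.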

In a similar fashion to the regime brought about by the GDPR, regulators will most likely require organisations to have processes in place for governance and oversight. Accountability, documentation obligations and audit trails are also likely to be among the boxes AI companies in the value chain will need to tick to be fully compliant.

H. Ireland – AI is here to stay

AI has evidently been endorsed by the Irish government as a sector in which Ireland intends to position itself as a focal point for innovative technology developments. In its recent publication “AI: Here for Good, A National Artificial Intelligence Strategy for Ireland” (the “Strategy”), the government set out its strategy to exploit AI in a people-centred, ethical way to retain Ireland’s global competitiveness and future productivity.

The Strategy’s emphasis on driving the adoption of AI across Irish enterprise through collaboration between industry and academic research, and between SMEs and multinationals, is welcome, and should create many opportunities for innovators to lawfully maximise the utilisation of AI in business.