Artificial intelligence and privacy: A balancing exercise

Key Contacts: Eoghan Doyle – Partner  |  Hugh Grattirola – Senior Associate

The recent development of new technologies relying on artificial intelligence (“AI”) across the world has shown the innovation, opportunities and potential value to society that AI can undeniably bring. Partly due to the Covid crisis, which encouraged the early adoption of automated processes, there has been a significant uptake of AI among businesses in recent times. AI is changing how companies operate across almost every sector, notably in fintech, healthcare, human resources, insurance and the internet of things, to name a few.

Focussing on Ireland, an interesting study has shown that nearly two thirds of businesses are likely to use AI (or machine learning, one of AI’s major subfields) by 2023. Many businesses see AI as capable of bringing a competitive edge by speeding up processes and driving cost savings. This is mainly because AI works by combining large amounts of data with fast processing and complex algorithms, enabling software to learn automatically from patterns or features in the data.

As is often the case with new technologies, the enthusiasm around AI comes with a number of concerns, mainly centred on privacy and fundamental human rights more generally. A good illustration of such concerns is the resolution passed on 6 October 2021 by the Members of the European Parliament (“MEPs”) on the use of AI in criminal law and by the police and judicial authorities in criminal matters (the “Resolution”).

The Resolution aims to address the issues arising from AI solutions when used by law enforcement and the judiciary. In particular, it seeks to combat discrimination and demands strong safeguards where AI is used in such circumstances.

Reaffirming the need for AI solutions to fully respect the principles of human dignity, non-discrimination, freedom of movement, the presumption of innocence and the right of defence, the MEPs pointed to the risk of discrimination that could arise from AI. As an example, the MEPs flagged that, in some cases, AI-based identification technologies have been shown to disproportionately misidentify and misclassify individuals, and as a result cause harm through discrimination.

The MEPs also cited the use of AI for surveillance, mass profiling, automated analysis and/or recognition in publicly accessible spaces as examples of practices that should be subject to a strict and permanent prohibition in light of the risks involved. On this topic, the Resolution was followed by a recommendation on the ethics of artificial intelligence signed on 24 November 2021 by the 193 members (notably including China but not the US) of the United Nations Educational, Scientific and Cultural Organisation (UNESCO).

The Resolution also placed a clear focus on the risks arising from the use of AI to make automated decisions that could have adverse legal effects on individuals. The MEPs called for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI system for law enforcement or the judiciary. The idea is to retain a level of involvement by an actual human being, who would remain accountable for decisions made via automated means.

The Resolution was passed in the context of the recent proposal by the European Commission for a regulation aimed at establishing a harmonised framework for AI at EU level (the “EU AI Regulation” – see our article on this topic here). The EU AI Regulation forms part of a broader strategy from the European Commission to address the risks generated by AI and to put in place a set of rules with extra-territorial application, in a manner similar to the General Data Protection Regulation (“GDPR”). Following the passing of the Resolution, the EU Council shared a first compromise text of the EU AI Regulation on 29 November 2021. Interestingly, this compromise text appears to have taken into account some of the points raised in the Resolution (for example, by extending the ban on the use of AI systems for social scoring purposes from public authorities to private entities, and by classifying as high risk the use of AI-based biometric systems for the real-time identification of individuals without their agreement).

The European approach to AI is very much risk-based and closely linked with that taken under the GDPR. Indeed, as AI relies on large amounts of data, data protection concepts and principles – such as accountability for the risks arising, transparency of the processing undertaken, minimisation of the data used, risk assessment and mitigation, and the rights of the individuals affected – are likely to come into play and will have to be factored in when operating a sophisticated AI system.

It will be interesting to see what level of regulation will be implemented in the EU and how such a regime will interact with data protection legislation and other sector-specific regulations. Already a complex technology, AI will no doubt also become a complex area of law that organisations will need to navigate carefully.


