The EU AI Act

Published on: 
Pharmaceutical Technology, November 2023, Volume 47, Issue 11
Pages: 8-9

The EU’s AI Act is set to become the world’s first comprehensive legal framework for artificial intelligence.

The use of artificial intelligence (AI) in the European Union (EU) will be governed by the AI Act, which is set to become the world’s first comprehensive legal framework for AI. The AI Act forms part of the EU’s digital strategy and was originally proposed by the European Commission (EC) in April 2021. The Council of the EU subsequently adopted its general approach in December 2022, and the European Parliament adopted its negotiating position in mid-June 2023. Following these developments, the three bodies will now negotiate the final details before the policy can become law, in a process called “trilogue,” or a three-way negotiation (1).

The AI Act aims to establish harmonized rules for the development, placing on the market, and use of AI in the EU, with the overarching goal of turning the EU into a global hub for trustworthy AI. The scope of the AI Act is very wide, covering systems developed through an array of approaches that are listed in Annex I of the AI Act. These include machine learning approaches that also incorporate deep learning; logic and knowledge-based approaches; as well as statistical approaches, Bayesian estimation, and search and optimization methods (2).

Objectives of the European Parliament

The European Parliament’s priority is to make sure that AI systems used in the EU “are safe, transparent, traceable,” prevent bias and discrimination, foster social and environmental responsibility, and ensure respect for fundamental rights (3). Crucially, the European Parliament believes that AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes (4).

The parliament also aims to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems. To this end, AI is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (1). Notably, this definition of AI focuses on outputs and objectives, rather than on the underlying technology or algorithms, because the regulation aims to establish a framework for the ethical and trustworthy development and use of AI systems in the EU (5).

As regards intellectual property (IP) rights, the European Parliament has also stressed the importance of addressing issues relating to patents and new creative processes, as well as resolving questions of ownership relating to something that is entirely developed by AI (3).

A risk-based approach to AI

The new rules follow a risk-based approach so that AI systems can be effectively assessed. Through this methodology, the European Parliament establishes obligations for providers and users depending on the level of risk that AI can generate. Providers are those who “develop an AI system to place it on the market or put it into service under their own name or trademark” (6). The Act splits AI into the following four bands of risk based on the intended use of a system:

Unacceptable risk. Under the new proposals, AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited. These include systems that deploy subliminal or purposefully manipulative techniques (such as cognitive behavioural manipulation) to exploit people’s vulnerabilities; systems that are used for social scoring (by classifying people based on their social behaviour, socio-economic status, and personal characteristics); and the use of real-time and remote biometric identification systems, such as facial recognition (7).

High risk. High-risk AI systems are subject to a detailed certification regime but are not deemed so fundamentally objectionable that they should be banned. These systems are divided into two categories:

AI systems that are used in products covered by the EU’s General Product Safety legislation (8), which includes toys, aviation equipment, cars, medical devices, and lifts

AI systems that fall into eight specific areas that will have to be registered in an EU database:

  • biometric identification and categorization of natural persons
  • management and operation of critical infrastructure
  • education and vocational training
  • employment, worker management, and access to self-employment
  • access to and enjoyment of essential private services and public services and benefits
  • law enforcement
  • migration, asylum, and border control management
  • assistance in legal interpretation and application of the law (4).

All systems deemed ‘high risk’ will be assessed before being placed on the market and throughout their lifecycle.

Limited risk. Limited-risk AI systems are required to comply with minimal transparency requirements that allow users to make informed decisions. This category includes AI systems such as chatbots, emotion recognition and biometric categorization systems, and systems generating ‘deepfake’ or synthetic content (6). The legislation stipulates that users should be made aware when they are interacting with AI and be given the choice of whether to continue using it.

Minimal risk. The minimal-risk category includes applications such as spam filters or AI-enabled video games, for which the commission proposes regulation primarily through voluntary codes of conduct.

The AI Act will also establish a European Artificial Intelligence Board, which will be responsible for overseeing the implementation of the regulation and ensuring uniform application across the EU. The body will be tasked with putting forward recommendations on issues that arise as well as providing guidance to national authorities. According to the legislation, the board should reflect the various interests of the AI ecosystem and be composed of representatives of the EU member states (9).

Implications for the health technology sector

According to Burges Salmon (10), the AI Act is intended to directly affect health technology companies whose AI systems are placed on the EU market and are subject to third-party conformity assessments (made by notified bodies), as stipulated by the EU’s Medical Devices Regulation (MDR), [Regulation (EU) 2017/745] (11), and In Vitro Diagnostic Regulation (IVDR), [Regulation (EU) 2017/746] (12). These systems include AI-enabled diagnostic tools, therapeutic devices, and implantable devices such as pacemakers. Furthermore, the AI Act will also affect MedTech companies whose AI systems fall outside the MDR and IVDR, such as AI-enabled general practitioner (GP) apps, patient chatbots, and fall detection systems.

Health technology companies whose products fall under the category of ‘high risk’ AI systems will have to meet significant obligations in relation to:

  • reporting requirements to consumers
  • transparency to users
  • data protection and governance
  • technical documentation
  • record keeping
  • risk management
  • human oversight
  • robustness, accuracy, and security.

Although the safety risks specific to AI systems are meant to be covered by the AI Act, the overall safety of the product, and how the AI system is integrated into it, will be addressed by the conformity assessment under the MDR or IVDR, given that the AI Act is intended to “be integrated into the existing sectoral safety legislation [including the MDR and IVDR] to ensure consistency, avoid duplications, and minimize additional burdens” (10).

Next steps

On 14 June 2023, Members of the European Parliament adopted a negotiating position on the AI Act, and talks are now taking place with EU countries in the council regarding the final form of the law. The aim is to reach an agreement by the end of this year, and it is possible that the new legislation could enter into force in 2023. The majority of the provisions will then apply 24 months after that, during which time companies and organizations will have to ensure that their AI systems comply with the requirements and obligations set out in the regulation.

References

  1. Lynch, S. Analysing the European Union AI Act: What Works, What Needs Improvement. Stanford Institute for Human-Centered Artificial Intelligence (HAI), hai.stanford.edu, 21 Jul. 2023.
  2. EC. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. 21 April 2021.
  3. European Parliament News. AI Rules: What the European Parliament Wants. Europarl.europa.eu, 21 Oct. 2020.
  4. European Parliament News. EU AI Act: First Regulation on Artificial Intelligence. Europarl.europa.eu, 8 Jun. 2023.
  5. Taylor Wessing. Introduction on the AI Act of the European Union. Insights (accessed 10 Oct. 2023).
  6. Edwards, L. The EU AI Act: A Summary of its Significance and Scope. Ada Lovelace Institute, April 2022.
  7. European Parliament News. AI Act: A Step Closer to the First Rules on Artificial Intelligence. Europarl.europa.eu, 11 May 2023.
  8. EC. Directive 2001/95/EC of the European Parliament and of the Council on General Product Safety. 3 Dec. 2001.
  9. MacCarthy, M.; Propp, K. Machines Learn that Brussels Writes the Rules: The EU’s New AI Regulation. Brookings Institution, Commentary, 4 May 2021.
  10. Whittaker, T.; Slocock, A. The EU’s Proposed AI Act: A Quick Guide for Health Tech. Burges Salmon, Press Release, 25 Jul. 2022.
  11. EC. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance). Current consolidated version: 20 Mar. 2023.
  12. EC. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on In Vitro Diagnostic Medical Devices and Repealing Directive 98/79/EC and Commission Decision 2010/227/EU. Current consolidated version: 20 Mar. 2023.

About the author

Bianca Piachaud-Moustakis is lead writer at PharmaVision, Pharmavision.co.uk.

Article details

Pharmaceutical Technology Europe
Vol. 35, No. 11
November 2023
Pages: 8–9

Citation

When referring to this article, please cite it as Piachaud-Moustakis, B. The EU AI Act. Pharmaceutical Technology Europe 2023 35 (11).