NAALA | Not An Average Legal Advisor

AI Act: overwhelming, or comprehensive system?

Anne Sophie Dil

Co-founder of NAALA

Published on 15 September 2021

Nearly every healthcare technology developer is talking about artificial intelligence (AI), if not already applying it. AI holds a lot of promise: like many technologies, it can transform healthcare. Not surprisingly, many see opportunities in the medical application of AI.

Innovation brings opportunity, but novelty often also brings risk, especially in healthcare. In April 2021, the European Commission published a proposal for a Regulation harmonizing rules on artificial intelligence (the AI Act). The proposal is intended in part to assure Europeans that they can trust what AI has to offer, even where an AI system creates risks.

AI systems in healthcare are likely to be categorized as “high-risk AI” and hence subject to the proposed AI Act. When an AI system can influence a patient’s diagnosis or treatment, a mistake can have far-reaching consequences. We’ve heard that before: when software becomes a medical device. The proposed AI Act: yet another set of new rules following the GDPR and the MDR. Are these various pieces of legislation overwhelming for medical technology developers, or do they form a comprehensive system?

The European Commission proposal defines an AI system as follows:

Software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The AI techniques and approaches listed in Annex I of the proposal are the following:

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning,
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems, and
  • Statistical approaches, Bayesian estimation, search and optimization methods.
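The breadth of this list is easy to underestimate. Purely as an illustration (not a legal test), even a few lines of classical Bayesian estimation, as in the hypothetical sketch below, already fall under the “statistical approaches, Bayesian estimation” item of Annex I:

```python
# Illustrative sketch only: a minimal Bayesian estimate (Beta-Binomial
# model). Software containing even this much would use a technique
# listed in Annex I of the proposal, showing how broad the proposed
# definition of an "AI system" is.

def posterior_mean(successes: int, trials: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a success probability under a Beta prior."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

# E.g., an estimated probability that a treatment works, after
# 7 successes in 10 trials, starting from a uniform Beta(1, 1) prior:
estimate = posterior_mean(7, 10)
```

Whether such software actually falls within scope will of course depend on the remaining elements of the definition, such as generating outputs for human-defined objectives.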

It follows from the proposal that the AI Act will have a broad scope. The provisions will apply to:

  • organizations providing AI systems in the EU, even if the organization is not itself located in the EU,
  • users of AI systems who are in the EU; and
  • providers and users of AI systems that are not in the EU, if the output of the system is used in the EU.

The proposed AI Act places specific obligations on so-called ‘high-risk AI systems’. These obligations must be fulfilled before the AI system can be marketed in the EU. High-risk AI systems are defined as: AI systems that create a high risk to the health and safety or fundamental rights of natural persons.

Each system will need to be assessed on its own merits, but AI systems in healthcare can be assumed more likely to qualify as high-risk. The classification of an AI system as high-risk is based on its intended purpose, that is, the purpose for which the system is placed on the market. The classification therefore depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

An AI system is to be considered high-risk if:

  • it is intended to be used as a safety component of a product, or is a product itself (i.e., standalone), covered by one of the EU legislations listed in the proposed AI Act, and
  • that specific EU legislation requires the finished product to be subject to a conformity assessment procedure.
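The two conditions above are cumulative. A minimal sketch, with hypothetical names of our own that carry no legal meaning:

```python
# Hypothetical sketch of the two cumulative conditions for the
# product-safety route to "high-risk" under the proposed AI Act.
# Both conditions must hold; the parameter names are our own.

def is_high_risk(covered_by_listed_eu_legislation: bool,
                 requires_conformity_assessment: bool) -> bool:
    """High-risk if the system is a safety component of (or itself) a
    product covered by listed EU legislation AND that legislation
    requires a conformity assessment procedure."""
    return covered_by_listed_eu_legislation and requires_conformity_assessment

# An AI-based diagnostic app: a medical device under the MDR (listed
# legislation) that must undergo a conformity assessment.
print(is_high_risk(True, True))  # True
```

Note that the proposal also lists certain standalone AI systems as high-risk directly in an annex; the sketch covers only the product-safety route described above.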

The listed legislation in the proposed AI Act concerns product safety legislation. This includes legislation for medical devices. Therefore, AI systems that are classified as medical devices under the Medical Devices Regulation (MDR) – and as such, must go through one of MDR’s conformity assessment procedures – are also subject to the proposed AI Act.  

The new rules for AI are being drafted according to the same framework as the MDR: the New Legislative Framework (NLF). The NLF aims to improve the internal market for goods by strengthening the conditions for bringing products to the EU market, so that products in the EU market are regulated in an equivalent way.

Since both the proposed AI Act and the MDR are drafted according to this framework, many of the requirements are reflected in the structure of both regulations. For example:

  • the establishment and maintenance of a risk management system
  • drawing up technical documentation for the product, and retaining it for a period of 10 years
  • issuing instructions for use to inform the user of the intended purpose and proper use of the product
  • the establishment and maintenance of a quality management system
  • the performance of tests and validation activities to demonstrate quality
  • following a conformity assessment procedure
  • setting up and maintaining a post-market surveillance system, including taking corrective action if necessary
  • setting up a system for reporting serious incidents to the relevant authorities
  • the affixing of a CE marking, and the drawing up of a Declaration of Conformity, to demonstrate conformity with the applicable rules
  • where possible, the use of harmonized standards and/or common specifications

AI system providers that already comply with the MDR will likely need to update the above elements in response to the proposed AI Act. Providers of AI systems that are not yet subject to the MDR, but will become so due to future developments, can implement both regulations simultaneously across the above elements.

Clearly, the application of AI requires data, and plenty of it: an algorithm is not easily trained. When this data includes personal information, the General Data Protection Regulation (GDPR) will also apply.

The use of an AI system to process personal data often results in profiling and/or automated decision making. As shown in our previously published blog on the rulings of the Italian data protection authority on automated decision making, the use of this form of data processing requires consideration of a few conditions:

  • fairness, i.e., no discrimination,
  • transparency, i.e., clear information about the automated decision making, and
  • the right to human intervention

The proposed AI Act also addresses these conditions:

Ad 1) The proposal explicitly addresses the prevention of discrimination, and thereby fairness, by requiring the use of high-quality data. In addition, it allows AI system providers to process special categories of personal data, such as health data, when necessary to monitor, detect and correct bias. The use of high-quality data contributes to fairness, but may not be sufficient to ensure it.

Ad 2) The proposal stipulates that regulated AI systems must be sufficiently transparent so that users can correctly interpret and use the outcomes. Understandable instructions for use must be supplied with the AI system.

Ad 3) The proposal addresses human oversight. Someone (or several people) will be designated to oversee the AI system. This person should be able to intervene if the AI system is not functioning fairly or properly or is in danger of functioning poorly.

According to the proposal, the regulation could enter into force in the second half of 2022, at which point the transition period starts. From then on, providers of AI systems have time to implement the provisions of the regulation. The provisions are expected to become applicable from the second half of 2024; from that moment, the rules will be enforceable.

Is your AI system already on the market before the second half of 2024? Then it is expected to enjoy an additional transition period of 12 months: by the second half of 2025, the AI system must comply with the requirements of the AI Regulation.

If an organization unjustifiably fails to comply, a fine may be imposed:

  • up to €30,000,000 or 6% of total worldwide annual turnover, whichever is higher, in case of non-compliance with the rules on prohibited AI practices or the data and data governance requirements
  • up to €20,000,000 or 4% of total worldwide annual turnover, whichever is higher, in case of non-compliance with other requirements
  • up to €10,000,000 or 2% of total worldwide annual turnover, whichever is higher, if incorrect, incomplete or misleading information is provided to the authorities or notified bodies.
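Under the proposal, the ceiling in each tier is the higher of the fixed amount and the percentage of turnover. A small sketch of that arithmetic (the tier names are our own shorthand):

```python
# Sketch of the fine ceilings in the proposed AI Act: per tier, the
# ceiling is the fixed amount or the percentage of total worldwide
# annual turnover, whichever is higher. Tier names are our own.

FINE_TIERS = {
    "prohibited_or_data": (30_000_000, 0.06),
    "other_requirements": (20_000_000, 0.04),
    "misleading_information": (10_000_000, 0.02),
}

def max_fine(tier: str, annual_turnover: float) -> float:
    """Maximum fine in euros for a given tier and turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * annual_turnover)

# For a company with EUR 1 billion in turnover, the top-tier ceiling is
# 6% of turnover (EUR 60 million), since that exceeds EUR 30 million.
```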

For some, the new AI Act raises more questions than it clarifies. However, the proposed regulation clearly shows similarities and correlation with existing legislation on medical devices and personal data protection.

There are different approaches to a multitude of regulations. Organizations can choose to apply each regulation as an individual set of rules, or opt for a comprehensive system to address overlapping requirements.

Do you develop medical software? Are you curious about the possible impact of this new legislation on your organization and product? We would be happy to think along with you at this stage, for example to determine a plan of approach or roadmap in preparation for the implementation of requirements relevant to you. Please feel free to reach out.

Please note that all details and listings do not claim to be complete, are without guarantee and are for information purposes only. Changes in legal or regulatory requirements may occur at short notice, which we cannot reflect on a daily basis. 
