The EU AI Act: What It Means for Healthcare

Over the past four years, the global medical market has dramatically reinvented itself to withstand a series of unpredictable and volatile events. The COVID-19 pandemic set into motion a need for unshakeable supply chain efficiency in the wake of worldwide lockdowns. Geopolitical conflicts, such as the Russia-Ukraine war and the Israel-Palestine conflict, led innovators to reimagine how medical care could be provided in the midst of warfare. New treatments for chronic conditions emerged, providing relief for millions around the world.

Within the AlphaSense platform, we’ve noticed a more than 75% increase in documents mentioning “EU AI Act” over the past year. 

Simultaneously, artificial intelligence (AI) and its use cases rapidly expanded with the rollout of generative AI (genAI). From drug discovery and R&D to workflow automation, large-scale data analysis, vaccine development, and more, excitement remains palpable for this new iteration of AI as researchers and developers wield it to streamline once tedious, time-consuming processes. However, this excitement has also prompted speculation about where the technology could fall short.

Privacy concerns regarding patient information, biased or prejudicial algorithms, and unethical usage of genAI have stirred up controversy among skeptics, leading to a new legislative framework that safeguards both patients and providers. Specifically, the EU recently introduced the Artificial Intelligence Act (EU AI Act), “the world’s first comprehensive AI law.” 

While the Act more broadly applies to corporations that leverage AI and genAI for consumer use, it sets parameters in accordance with EU policy to “protect and improve health, giving equal access to modern and efficient healthcare for all Europeans, and coordinating any serious health threats involving more than one EU country.”

Passed this May by the Council of the European Union, the Act is leading medtech leaders to question not only how these stipulations affect tools already in circulation, but also what they mean for a future where AI is rooted in healthcare. Below, we use the AlphaSense platform to unveil how the EU AI Act will reshape the industry in the coming months and years.

The EU AI Act

At the most fundamental level, the Act sets parameters for how artificial intelligence and genAI are leveraged and distributed within the healthcare sector. The National Library of Medicine states that the Act “aims to set out (a) uniform internal market rules ‘for the development, placing on the market, putting into service and the use of artificial intelligence’ in a manner that (b) promotes ‘human centric and trustworthy artificial intelligence’ and (c) ‘ensures a high level of protection of health, safety, fundamental rights.’”

In the first year of its enactment, the Act will target general-purpose AI systems; within two years, it will extend to AI-enabled digital health tools (DHTs); and eventually it will cover high-risk AI systems, a category that includes many DHTs.

It simultaneously introduces a range of new obligations for developers, deployers, notified bodies, and regulators, particularly those working with the newly created EU AI Office and the European Commission. Like other EU regulations, it applies to DHTs developed outside the EU but marketed and used within it. It does not, however, extend to “AI systems and models specifically developed and put into service for the sole purpose of scientific research and development.”

Regulators and medtech titans alike remain confused about the exact parameters of the Act. According to Nature Medicine: “Due to the horizontal nature, wide scope, and rapidly changing nature of healthcare AI, there will be many problems created by ambiguities and uncertain intersections with existing laws. Here, flexibility and intelligence of the responsible bodies, including the Commission, the EU AI Office, the committees responsible for creating guidelines for medical device software under the MDR, and even of the courts will be essential to meeting the stated aim of the AI Act, to promote rather than to decimate AI sector entrepreneurial activity and innovation.”

This confusion amongst healthcare professionals regarding the broad-based EU AI Act isn’t new, though. Many experts and industry participants believe that instead of a one-size-fits-all approach, the existing sector-specific regulations, namely the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), could be altered to address the nuances of AI technologies.

Currently, these sector-specific laws classify AI-powered software as a type of medical device if it is designed to assist in making medical decisions. This includes functions related to diagnosing, preventing, monitoring, predicting, prognosing, treating, or alleviating health conditions, as well as supporting lifestyle changes related to health.

Despite the law being finalized, numerous ambiguities and potential conflicts with existing legislation persist. Ultimately, the Commission must ensure the Act succeeds in safeguarding ethical use of the technology while promoting, rather than hindering or complicating, healthcare innovation.

Implications of the Act’s Language

In recent years, the term “artificial intelligence” has become widely used to label technologies, even when the software in question consists of static, non-adaptive algorithms. Manufacturers will need to assess whether their software qualifies as an AI system under the Act, which defines an AI system as:

“A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The use of “infer” is somewhat vague within its given context, but the Act’s introductory sections serve as guidance for its interpretation. They state that it does not cover systems that operate based on rules created exclusively by humans for automatic execution. Essentially, software consisting of rigid, rules-based algorithms is excluded from the Act’s scope. However, systems that involve more advanced functions such as learning, reasoning, or modeling are likely to fall under its jurisdiction.

Which technologies fall under these descriptions is unclear, as the Act will not apply to many current AI solutions that rely on static diagnostic algorithms rather than autonomous or self-learning features. Systems with incremental learning capabilities, however, might fall within its scope.
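
To make the distinction concrete, below is a minimal, hypothetical Python sketch contrasting the two categories: a static, rules-based score whose logic is authored entirely by humans, which would likely fall outside the Act’s definition, and a model that infers its decision rule from data, which would likely fall within it. The function names, thresholds, and use of scikit-learn are illustrative assumptions, not anything prescribed by the Act.

```python
# Hypothetical illustration only -- not legal or regulatory guidance.
from sklearn.linear_model import LogisticRegression

# 1) Static, rules-based logic: every rule is authored by humans for
#    automatic execution. Software like this likely falls outside the
#    Act's definition of an "AI system."
def rule_based_triage(heart_rate: int, spo2: float) -> str:
    if heart_rate > 120 or spo2 < 0.90:
        return "urgent"
    return "routine"

# 2) Learned logic: the mapping from inputs to outputs is inferred
#    from training data rather than hand-written, so a system built
#    this way likely falls within the Act's definition.
def train_learned_triage(features, labels) -> LogisticRegression:
    model = LogisticRegression()
    model.fit(features, labels)  # the system infers its own decision rule
    return model
```

Notably, both approaches can produce the same kind of output; under the definition, what matters is whether the mapping from inputs to outputs was inferred from data or fixed in advance by its human authors.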

The Act’s “high-risk” category also captures Class IIa (or higher) medical devices and technology using an AI system as a safety component. This classification further encompasses a wide range of healthcare AI systems, whether or not they are “medical devices,” including AI systems used by government bodies to assess individuals’ eligibility for critical public services, as well as those employed in emergency healthcare settings for patient triage. “High-risk” AI systems must meet a range of additional stipulations, many of which mirror the stringent standards already established under the MDR and IVDR.

MDR and IVDR vs. the EU AI Act

The Act incorporates many provisions that echo those found in the MDR and EU IVDR, such as the mandates for a quality management system, technical documentation, and usage instructions. However, medtech manufacturers who have already achieved certification under the EU MDR and IVDR will likely need to update their technical documentation to align with the new stipulations of the EU AI Act.

In addition to existing EU MDR and IVDR requirements, the Act introduces several new stipulations for AI systems, including:

  • Governance and data management protocols for training and testing datasets
  • Enhanced record-keeping practices, such as automatic event logging throughout the system’s lifecycle (see the sketch after this list)
  • Design transparency to ensure that users can understand and properly utilize the system’s outputs
  • Requirements for human oversight in design
  • Standards for accuracy and cybersecurity
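
As one illustration of what the record-keeping stipulation might look like in practice, here is a minimal, hypothetical Python sketch that wraps a model inference call in automatic, timestamped audit logging. The Act does not prescribe a log format; the file name, field names, and `predict_with_logging` helper are assumptions made for this example.

```python
# Hypothetical sketch of automatic event logging -- the Act does not
# prescribe a specific log format; all field names here are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

def log_event(event_type: str, details: dict) -> None:
    """Append a timestamped, structured event to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "model_update"
        **details,
    }
    logging.info(json.dumps(record))

def predict_with_logging(model, patient_features: dict):
    """Run one inference and record its inputs and outputs for audit."""
    output = model.predict([list(patient_features.values())])
    log_event("inference", {
        "model_version": getattr(model, "version", "unknown"),
        "inputs": patient_features,
        "output": output.tolist(),
    })
    return output
```

Structured, append-only records like these are what make it possible to reconstruct a system’s behavior over its lifecycle after the fact.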

Despite efforts by lawmakers to harmonize overlapping regulatory frameworks, numerous uncertainties persist. For instance, the interaction between the AI Act’s framework for substantial modifications and the modification regulations under the MDR and IVDR remains unclear. Additionally, it is uncertain whether devices undergoing trials, such as performance evaluations or clinical investigations, will require certification under the AI Act before they can be used in these trials.

The bottom line: the coming months and years will hopefully clarify the discrepancies between the EU AI Act, the MDR, and the IVDR. So how can medtech distributors effectively release an AI system that adheres to all three of these regulations? The answer: regulatory sandboxes.

The EU AI Act envisages setting up “coordinated AI ‘regulatory sandboxes’ to foster innovation in artificial intelligence (AI) across the EU.” These “sandboxes” enable companies to test and develop new and creative products, services, or business models under the oversight of regulators. In other words, innovators have a controlled setting to experiment with their ideas, help regulators gain insights into emerging technologies, and ultimately enhance consumer options. Nevertheless, regulatory sandboxes carry the potential for misuse or exploitation and require a robust legal structure to ensure their effectiveness.

“Regulatory overlap with the MDR and IVDR is likely to persist in the finalized AI Act text. However, thorough preparation by medtech companies and the use of regulatory sandboxes could help address the need for dual conformity.”

– Medtech Insight | EU AI Act Regulatory Overlap “Likely to Persist”: Expert Presents Solutions for Dual Conformity

Keeping Tabs on the Intersection of AI and Healthcare

It’s becoming clear to medtech leaders, providers, and clients that the future of healthcare will be AI-centric. For C-suite executives looking to stay informed on how new regulations, at home and abroad, are shaping the industry, a market intelligence platform that aggregates the most critical insights is a prerequisite to staying ahead of competitors.

AlphaSense equips you with an extensive universe of content layered with AI search technology. Access thousands of premium, public, private, and proprietary content sources—including broker research, earnings calls, expert calls, regulatory filings, and more—in seconds, for the most comprehensive due diligence and informed decision-making.

Eliminate noise and surface the intelligence you need—start your free trial of AlphaSense today.

ABOUT THE AUTHOR
Tim Hafke
Content Marketing Specialist

Formerly a writer for publications and startups, Tim Hafke is a Content Marketing Specialist at AlphaSense. His prior experience includes developing content for healthcare companies serving marginalized communities.
