AI Integration in Medical Devices and Software as a Medical Device (SaMD) Outpaces Global Regulatory Oversight
The regulatory landscape for Artificial Intelligence (AI) in medical devices is evolving to accommodate the unique characteristics of AI technologies, which can change after deployment and therefore demand oversight across the full product lifecycle.

In the United States, the Food and Drug Administration (FDA) has been at the forefront of this evolution. Novel devices without a predicate can be authorized through De Novo classification, a pathway designed for low- to moderate-risk devices with novel technologies. When assessing the safety and effectiveness of algorithms within an AI-enabled Software as a Medical Device (SaMD), the FDA considers factors including data quality, robustness, and clinical performance.

The majority of AI-enabled devices in the US reach the market via the 510(k) pathway, in which applicants must demonstrate that their device is substantially equivalent to an already FDA-authorized device (a predicate device). However, the FDA's regulation of AI-enabled SaMD is still evolving, with efforts focused on balancing safety, performance, and innovation.

In January 2025, the FDA issued draft guidance entitled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations". This guidance aims to address continuous algorithm changes after authorization, typically managed via predetermined change control plans (PCCPs) so that repeated premarket submissions are avoided unless a change introduces significant risk.
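The PCCP idea can be sketched as a pre-specified envelope check: a manufacturer commits in advance to the kinds of changes it may make and the performance bounds any update must stay within. The sketch below is a minimal illustration only; the change types, metric names, and thresholds are hypothetical and are not drawn from the FDA guidance.

```python
from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    """Pre-authorized envelope for post-market model updates (illustrative)."""
    min_sensitivity: float       # hypothetical performance floor
    min_specificity: float       # hypothetical performance floor
    allowed_change_types: set    # e.g. retraining, threshold tuning

def update_permitted(plan: ChangeControlPlan, change_type: str,
                     sensitivity: float, specificity: float) -> bool:
    """True if a proposed update stays within the pre-specified envelope;
    anything outside it would require a new regulatory submission."""
    if change_type not in plan.allowed_change_types:
        return False
    return (sensitivity >= plan.min_sensitivity
            and specificity >= plan.min_specificity)

plan = ChangeControlPlan(0.92, 0.90, {"retraining", "threshold_tuning"})
update_permitted(plan, "retraining", 0.94, 0.91)      # within envelope
update_permitted(plan, "new_indication", 0.99, 0.99)  # outside pre-agreed scope
```

The point of the pattern is that the gate is fixed before deployment, so post-market iteration happens inside boundaries the regulator has already reviewed.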

Across the globe, regulatory challenges persist. Existing frameworks, built around static devices, are ill-suited to AI autonomy and adaptability. In Europe, AI-enabled SaMD is considered high-risk, triggering obligations under both the AI Act and the Medical Device Regulation (MDR) or In Vitro Diagnostic Regulation (IVDR). This means dual conformity assessments: one for clinical safety and one for AI-specific criteria such as data governance, transparency, risk management, and human oversight, increasing complexity and the documentation burden on manufacturers.

Emerging regulatory trends include a transition from static approval models to lifecycle and iterative regulatory frameworks that accommodate AI continuous learning and updates. There is also an increased emphasis on risk management systems tailored for AI, including human oversight considerations to mitigate potential harm from autonomous decisions.

As of October 2024, 22 low- to moderate-risk devices had received authorization via the De Novo pathway. The first AI-enabled medical device, an automatic interactive gynaecological instrument for analyzing Papanicolaou (PAP) cervical smears, was approved by the FDA in 1995. If an AI-enabled device makes specific recommendations around a diagnosis or treatment, it falls under FDA regulation. However, AI software tools intended to assist with administrative tasks such as scheduling, inventory, or financial management are exempt.

If adaptive AI is deployed within SaMD for clinical applications, developers, engineers, and regulators must carefully consider what data the algorithm will have access to for continued learning. To date, only four devices have required premarket approval, the most rigorous pathway, reserved for high-risk devices. Quality system and post-market requirements, including adverse event reporting, also apply to AI-enabled devices.
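One way to make "what data the algorithm will have access to" concrete is a gate that every candidate record must pass before it feeds continued learning. The sketch below is a hypothetical policy, not a regulatory requirement: the field names and consent mechanism are illustrative assumptions.

```python
def eligible_for_learning(record: dict, consented_ids: set,
                          required_fields=("image", "label", "site_id")) -> bool:
    """Gate a candidate training record for an adaptive SaMD model:
    require documented patient consent and complete, provenance-tagged
    fields before the record may be used for continued learning.
    (Illustrative policy; field names are assumptions.)"""
    if record.get("patient_id") not in consented_ids:
        return False  # no documented consent, exclude from learning
    # all provenance and labeling fields must be present
    return all(record.get(f) is not None for f in required_fields)

consented = {"p-001", "p-002"}
eligible_for_learning(
    {"patient_id": "p-001", "image": "scan.png", "label": 1, "site_id": "A"},
    consented)  # consented and complete: usable
eligible_for_learning(
    {"patient_id": "p-999", "image": "scan.png", "label": 1, "site_id": "A"},
    consented)  # no consent record: excluded
```

Filtering at ingestion keeps the learning loop auditable: every record that reaches the model can be traced back to a consent entry and a known acquisition site.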

AI in medical devices and SaMD must be sufficiently advantageous and cost-effective to succeed on the market, and current regulatory pathways may be stifling the adoption of more AI within medical devices. Stakeholders, including the American Medical Association (AMA), emphasize the need for a whole-of-government, coordinated, and transparent regulatory ecosystem that provides consistency and clarity for developers, clinicians, and patients. Fragmentation of rules across federal and state levels risks slowing innovation and creating safety gaps.

In summary, the global regulatory landscape for AI-enabled SaMD is shifting towards more agile, risk-based, and coordinated frameworks, adapting to the unique characteristics of AI technologies that evolve post-market and require comprehensive oversight of data integrity, transparency, human oversight, and clinical performance to ensure patient safety and trust.
