AI Regulatory and Governance Frameworks for Medical Devices

April 2, 2026 John O'Gorman


This extract comes from Beyond the Hype: AI in Medical Devices | Regulatory Guidance and Best Practices, the second booklet in our recent Regulatory Booklet Series. The series explores the core regulatory requirements medical device companies need to understand when developing new products.

Navigating regulations for AI in medical devices can feel like a maze, but getting a clear understanding upfront is essential for designing systems that are both safe and compliant. Knowing the rules helps companies plan from day one, build robust processes, and anticipate regulatory expectations.

Even consumer devices, like smartwatches, can provide valuable health data — but turning that data into medical-grade insights requires careful design: 

  • Accurate, reliable sensors: the hardware must consistently measure what it claims 
  • Self-monitoring and error detection: devices should flag failures or anomalies to ensure system reliability 
  • Clear and narrow intended use: defining exactly what the AI is meant to do reduces risk and simplifies regulatory compliance

Clear boundaries, ongoing monitoring, and robust validation are critical to ensure safety, reliability, and regulatory adherence. 
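The self-monitoring and error-detection point above can be made concrete with a small sketch. This is an illustrative example only, not a certified implementation: the heart-rate limits and the flatline threshold are hypothetical values chosen for demonstration, and a real device would derive such checks from its risk analysis.

```python
from statistics import pstdev

# Hypothetical plausibility limits for a wrist heart-rate sensor (bpm).
HR_MIN, HR_MAX = 30, 220
FLATLINE_STD = 0.01  # near-zero variance suggests a stuck or detached sensor


def check_readings(readings):
    """Flag anomalies in a window of heart-rate samples.

    Returns a list of issue strings; an empty list means the window
    passed these basic plausibility checks.
    """
    issues = []
    if not readings:
        return ["no data: possible sensor dropout"]
    if any(r < HR_MIN or r > HR_MAX for r in readings):
        issues.append("out-of-range value: possible measurement error")
    if len(readings) > 1 and pstdev(readings) < FLATLINE_STD:
        issues.append("flatline signal: sensor may be stuck or detached")
    return issues
```

The point of checks like these is that the device flags its own failure modes (dropout, implausible values, stuck sensors) rather than silently passing bad data downstream to the AI.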

 

EU AI Act (Binding regulation)  

The EU AI Act is a formal law, not just guidance, and most healthcare AI will be considered high-risk. This means companies need to:  

  • Build a risk management plan for the AI system  

  • Ensure data quality, security, and integrity  

  • Keep clear technical documentation of how the AI works  

  • Make sure humans can oversee and intervene if needed 

 

Think of it as a checklist to show regulators your system is safe and trustworthy. It’s also important to note that the EU AI Act does not exist in isolation. Its introduction has implications across other EU regulatory frameworks, particularly GDPR and data governance, and the EU is actively working through how these overlaps should be managed. As a result, the regulatory landscape is in flux and likely to continue evolving.  

For example, on 19 November 2025, the European Commission released its Digital Omnibus package proposing amendments to the AI Act, GDPR, and the Data Act to address concerns around regulatory fragmentation and overregulation. This was followed on 16 December 2025 by a proposal to simplify the Medical Device Regulation (MDR), including changes that would largely remove medical devices from direct regulation under the AI Act, with the MDR remaining the primary framework supplemented by additional AI-specific checks. Given this pace of change, staying up to date with regulatory developments is critical. 

 

ISO/IEC standards (Guidance, voluntary)  

These standards aren’t legally required, but following them makes life easier and demonstrates best practices:  

  • ISO/IEC 23894 – AI risk management: Shows how to identify, evaluate, and reduce AI risks, feeding into patient safety processes  

  • ISO/IEC 42001 – AI quality management: Offers a framework to run a quality system for AI, covering operational management, performance tracking, and continuous improvement  

  • ISO/IEC 5259 – Artificial intelligence — Data quality for analytics and machine learning (ML): Provides foundational concepts, terminology, and examples for assessing and improving data quality across the data lifecycle, supporting reliable analytics and ML outcomes  


Applying these standards alongside your existing ISO 13485 processes helps keep AI development safe, structured, and traceable.  

 

FDA guidance for AI/ML in SaMD (Existing frameworks)  

AI/ML can transform medical devices, but it also introduces new complexity across the device lifecycle. The FDA regulates AI-enabled SaMD through existing pathways (510(k), De Novo, PMA) and provides guidance to address AI-specific challenges.  

Key principles:  

  • Lifecycle management: Plan for design, updates, monitoring, and maintenance. 

  • Good Machine Learning Practice: Ensure robust data, model performance, and risk management. 

  • Predetermined Change Control Plans: Define how AI may evolve safely over time. 

  • Transparency & oversight: Clearly communicate functionality, limitations, and support human intervention. 

  • Risk-based review: FDA focuses on changes that impact patient safety or device performance. 

Success depends on strong lifecycle management, data governance, and proactive planning for change.  
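The lifecycle-management and monitoring principles above can be sketched as a simple post-market performance monitor. The window size and accuracy floor below are assumed for illustration; in practice these would come from the device's performance claims and risk management file.

```python
from collections import deque

WINDOW = 200           # assumed size of the rolling outcome window
ALERT_ACCURACY = 0.85  # assumed performance floor from the device's claims


class PerformanceMonitor:
    """Track a rolling window of prediction outcomes and raise an alert
    when observed accuracy drops below the floor (illustrative sketch)."""

    def __init__(self):
        self.outcomes = deque(maxlen=WINDOW)

    def record(self, correct):
        """Log one prediction outcome (True/False); return an alert or None."""
        self.outcomes.append(bool(correct))
        if len(self.outcomes) == WINDOW:
            acc = sum(self.outcomes) / WINDOW
            if acc < ALERT_ACCURACY:
                return f"accuracy {acc:.2f} below {ALERT_ACCURACY}: trigger review"
        return None
```

An alert like this would feed the risk-based review loop: it does not fix anything itself, but it tells humans that the deployed model's real-world performance no longer matches what was validated.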

 

HTI-1 Final Rule (Binding U.S. regulation)  

The HTI-1 Final Rule, issued by the Office of the National Coordinator for Health IT (ONC), applies to AI used in certified health IT in the U.S. It focuses on transparency, safety, and fairness in predictive algorithms and clinical decision support tools.  

Developers need to ensure their AI meets FAVES: 

  • Fair: Free from harmful bias across patient populations  

  • Appropriate: Matches intended use and population  

  • Valid: Accurate results  

  • Effective: Works in real-world conditions  

  • Safe: Benefits outweigh risks  

 

Predetermined Change Control Plans (PCCPs) (Regulatory expectation)   

Predetermined Change Control Plans (PCCPs) are a regulatory expectation from the FDA for AI/ML-enabled medical devices. They provide a structured plan for how an AI system may evolve or adapt over time while remaining safe and effective. A key benefit of PCCPs is that they allow companies to implement updates efficiently — changes that fall within the plan do not require a new FDA submission, reducing regulatory burden while maintaining safety and performance.  

Key points for companies:  

  • Define the scope of changes your AI/ML system can undergo without requiring a new regulatory submission  

  • Document validation and verification processes for each type of change  

  • Ensure ongoing safety and performance through monitoring of model updates  

  • Integrate PCCPs with quality and risk management systems to maintain compliance  


In short, PCCPs let regulators and companies anticipate, control, and safely manage adaptive changes to AI systems, keeping patient safety and regulatory obligations at the forefront.  
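The scope-definition idea above can be illustrated in code. A PCCP is a regulatory document, not software, but the core decision — is this proposed change within the pre-authorized scope, and have its agreed verification steps passed? — can be expressed as a simple gate. The change types and verification steps below are hypothetical examples.

```python
# Hypothetical PCCP scope: change type -> verification steps that must pass.
PCCP_SCOPE = {
    "retrain_same_architecture": {"holdout_performance", "bias_audit"},
    "threshold_tuning": {"holdout_performance"},
}


def change_disposition(change_type, passed_checks):
    """Return how a proposed model change should be handled under the plan."""
    if change_type not in PCCP_SCOPE:
        return "out of scope: new regulatory submission required"
    missing = PCCP_SCOPE[change_type] - set(passed_checks)
    if missing:
        return f"in scope, blocked: missing {sorted(missing)}"
    return "in scope: deploy under PCCP with monitoring"
```

The design point is that the decision is predetermined: anything outside the agreed scope falls back to a new submission by default, which is exactly the behavior regulators expect from a PCCP.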

 
