AI in Medical Devices: What the EU AI Act Means for SaMD Developers in 2026
For years, the compliance question for AI-powered Software as a Medical Device was relatively straightforward: get CE marking under EU MDR or IVDR. That is no longer sufficient.
The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. Its high-risk AI obligations apply from August 2026 for new products, and August 2027 for existing CE-marked products. For any SaMD that includes AI or machine learning components, this introduces a second, overlapping regulatory framework on top of MDR and IVDR.
This guide explains what the AI Act requires from SaMD developers, how it interacts with existing EU medical device regulations, where your current IEC 62304 documentation falls short, and what your team should be doing right now.
Who this applies to
If you are developing or have already CE-marked AI-powered SaMD for the EU market — including diagnostic support tools, clinical decision support software, remote patient monitoring, or any software that uses machine learning to generate a clinical output — the EU AI Act applies to you. The Act has extra-territorial reach: if your product is used in the EU, you are in scope regardless of where your company is based.
What is the EU AI Act, and how does it differ from what you already comply with?
The EU AI Act is the world's first comprehensive regulatory framework specifically for artificial intelligence. It establishes a risk-based classification system with four tiers: prohibited practices, high-risk AI, limited-risk AI, and minimal-risk AI.
Medical AI almost universally falls into the high-risk category. Under Article 6(1) of the Act, an AI system that is a safety component of a medical device regulated under MDR or IVDR (or is itself such a device) and requires a Notified Body conformity assessment is automatically classified as high-risk AI. This is not a judgment call. If your SaMD requires Notified Body review, it is high-risk AI.
The key difference from what you already do under MDR and IEC 62304:
→ MDR and IEC 62304: govern how you build safe, functional software for a medical device. They focus on the development process, risk management, and clinical performance.
→ The EU AI Act: adds requirements specific to AI systems: data governance, algorithmic transparency, bias testing, human oversight architecture, and AI-specific post-market monitoring.
The two frameworks are designed to be complementary, not competing. But compliance with one does not imply compliance with the other. There are requirements in the AI Act — particularly around training data governance and explainability — that have no direct equivalent in MDR or IEC 62304.
The compliance timeline — including the detail most teams are missing
The AI Act is being phased in progressively. Here is the complete timeline relevant to medical device AI:
| Date | Milestone | What it means for SaMD teams |
|---|---|---|
| August 2024 | AI Act enters into force | The regulation is law. Compliance obligations begin phasing in. |
| February 2025 | Prohibited AI practices apply | Bans on unacceptable-risk AI (e.g. manipulation, social scoring). Unlikely to affect SaMD directly. |
| August 2025 | GPAI model obligations apply | If your SaMD is built on a foundation model (GPT-based, etc.), provider obligations now apply. |
| August 2026 | High-risk AI obligations apply — new products | AI-powered SaMD placed on the EU market from this date must fully comply. CE marking alone is no longer sufficient. |
| August 2027 | High-risk AI obligations apply — existing products | Products already on the market before August 2026 that fall under MDR/IVDR third-party conformity assessment must be fully compliant by this date. |
The detail most teams are missing
August 2026 is not a universal deadline. New products placed on the EU market from that date must comply immediately. But products already on the market before August 2026 that require MDR/IVDR third-party conformity assessment — which covers most Class IIb and Class III devices — have until August 2027. This distinction matters for prioritization, but it does not mean existing products can wait. Notified Bodies are already beginning to incorporate AI Act considerations into their MDR assessments.
What high-risk AI classification requires: the 8 obligation areas
Providers of high-risk AI systems (which includes SaMD developers) must meet requirements across eight areas under the Act. Here is what each means in practice for a medical device team:
1. AI quality management system (Article 17)
High-risk AI providers must maintain a quality management system that covers the AI lifecycle — from data governance and model development through deployment and monitoring. The good news: if you already operate under ISO 13485, you do not need a parallel AI QMS. The Act explicitly allows AI QMS requirements to be incorporated into your existing medical device quality system. You will need to extend your QMS to cover the AI-specific areas below, but you are not starting from scratch.
2. Technical documentation (Article 11 and Annex IV)
The AI Act requires a technical documentation file for the AI system. For medical devices subject to MDR or IVDR, this can be integrated into your existing technical file or Design History File rather than maintained as a separate document set. The AI Act technical documentation must cover:
- A general description of the AI system and its intended purpose
- A detailed description of the system's elements and its development process
- Information on training, validation, and testing data and methodology
- Monitoring, functioning, and control measures
- A description of the changes made through the lifetime of the system
The critical new addition here is the training and testing data documentation. This is not covered by your current DHF and will require new artifacts.
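To make that data documentation concrete, here is a minimal sketch of what a versioned training-dataset record might look like in practice. The `DatasetRecord` class, its field names, and the example values are illustrative assumptions, not a structure prescribed by Annex IV:

```python
# Minimal sketch of a versioned training-data record for the technical file.
# All field names and values are illustrative, not mandated by the AI Act.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DatasetRecord:
    name: str                      # e.g. the frozen training split
    version: str                   # dataset version, pinned per model release
    source: str                    # provenance: site, registry, licence
    collection_period: str
    preprocessing: list[str]       # ordered, reproducible preprocessing steps
    known_limitations: list[str]   # coverage gaps you have identified
    sha256: str = ""               # content hash so the exact data is traceable

    def fingerprint(self, raw_bytes: bytes) -> None:
        """Store a content hash of the frozen dataset artifact."""
        self.sha256 = hashlib.sha256(raw_bytes).hexdigest()

record = DatasetRecord(
    name="derm-lesion-train",
    version="2025.3",
    source="Hospital A PACS export, data use agreement on file",
    collection_period="2021-01 to 2024-06",
    preprocessing=["resize to 512x512", "exclude images with burned-in PHI"],
    known_limitations=["darker skin phototypes under-represented (~4%)"],
)
record.fingerprint(b"frozen dataset archive bytes go here")
print(json.dumps(asdict(record), indent=2))
```

Pinning a content hash alongside the version means your DHF can point to the exact bytes a given model release was trained on, which is what an assessor will ask for.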
3. Data governance and training data requirements (Article 10)
This is the area where SaMD developers most commonly find gaps against their existing IEC 62304 documentation. The Act requires that training, validation, and testing datasets:
- Are subject to documented data governance practices
- Are relevant, representative, and free of errors to the extent possible
- Are examined for potential biases that could lead to health risks or discrimination
- Cover the geographic, behavioral, or functional settings in which the system is intended to be used
In practice, this means you need to document your training data sources, versioning, preprocessing decisions, and the bias analysis methodology you applied — before deployment, not retrospectively.
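As an illustration of what a documented bias analysis can rest on, the sketch below computes per-subgroup sensitivity on a held-out test set and flags gaps above an acceptance threshold. The subgroup labels, the toy data, and the 0.05 threshold are all hypothetical; real thresholds should come out of your ISO 14971 risk analysis:

```python
# Minimal sketch of a pre-deployment subgroup bias check on binary labels.
# Subgroups, data, and the 0.05 gap threshold are illustrative assumptions.
from collections import defaultdict

def sensitivity_by_subgroup(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records: (subgroup, y_true, y_pred); returns sensitivity per subgroup."""
    tp: dict[str, int] = defaultdict(int)
    fn: dict[str, int] = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:  # sensitivity only looks at true positives vs misses
            (tp if y_pred == 1 else fn)[group] += 1
    groups = set(tp) | set(fn)
    return {g: tp[g] / (tp[g] + fn[g]) for g in groups}

test_set = [
    ("age<65", 1, 1), ("age<65", 1, 1), ("age<65", 1, 1), ("age<65", 1, 0),
    ("age>=65", 1, 1), ("age>=65", 1, 0), ("age>=65", 1, 0), ("age>=65", 1, 1),
]
per_group = sensitivity_by_subgroup(test_set)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, f"gap={gap:.2f}")
if gap > 0.05:  # acceptance threshold belongs in your risk analysis
    print("Sensitivity gap exceeds threshold: document and mitigate")
```

The output of a script like this, versioned alongside the model and dataset, is the kind of pre-deployment bias artifact Article 10 expects to see.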
4. Human oversight (Article 14)
High-risk AI systems must be designed so that human operators can effectively oversee them. This is not a labeling or instruction-for-use requirement — it must be built into the product architecture. The Act requires that the system includes the ability for operators to interpret outputs, override or interrupt the system, and understand when output reliability may be limited.
For clinical decision support AI, this means the product design must actively support human judgment rather than presenting AI outputs as definitive conclusions. How oversight is implemented needs to be documented in your technical file.
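One way to show oversight is architectural rather than cosmetic is to define an output contract that carries confidence, rationale, and a review gate. The sketch below is a hypothetical design, with illustrative field names and threshold, not a pattern mandated by Article 14:

```python
# Minimal sketch of an output contract that supports human oversight.
# The threshold and field names are illustrative design assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.80  # below this, the UI must require clinician review

@dataclass(frozen=True)
class ClinicalSuggestion:
    finding: str               # what the model suggests, never a verdict
    confidence: float          # calibrated probability shown to the clinician
    rationale: str             # which inputs drove the output
    needs_review: bool         # True => UI blocks auto-acceptance
    can_override: bool = True  # clinician can always reject or edit

def present(finding: str, confidence: float, rationale: str) -> ClinicalSuggestion:
    return ClinicalSuggestion(
        finding=finding,
        confidence=confidence,
        rationale=rationale,
        needs_review=confidence < REVIEW_THRESHOLD,
    )

s = present("possible melanoma", 0.62, "lesion asymmetry, border irregularity")
print(s)  # needs_review=True: the clinician must actively confirm or reject
```

Because the review gate lives in the output type itself, no downstream UI can silently present a low-confidence output as a conclusion, and the mechanism is straightforward to describe in your technical file.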
5. Transparency and provision of information to deployers (Article 13)
High-risk AI systems must be sufficiently transparent that deployers — hospitals, clinicians, healthcare institutions — can understand the system's capabilities and limitations. The Act requires documentation covering intended purpose, performance levels, accuracy metrics, and known limitations.
For AI in medical devices, this overlaps with your Instructions for Use requirements under MDR — but the AI Act goes further by requiring documentation of the conditions under which the system may be expected to fail or produce unreliable outputs.
6. Robustness, accuracy, and cybersecurity (Article 15)
High-risk AI systems must achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. For medical AI, this includes resilience against adversarial inputs and the ability to detect out-of-distribution inputs that could affect model reliability. This intersects with FDA's SBOM requirements and cybersecurity guidance for cyber devices — if you are building for both markets, your cybersecurity architecture needs to satisfy both frameworks.
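For a feel of what out-of-distribution detection can look like at its simplest, here is a sketch that gates inputs whose features fall far outside the training distribution. The z-score approach and the 4.0 cutoff are illustrative assumptions; production systems typically use stronger detectors:

```python
# Minimal sketch of an out-of-distribution input gate built from training-set
# feature statistics. The z-score cutoff of 4.0 is an illustrative assumption.
import statistics

class OODGate:
    def __init__(self, training_features: list[list[float]], cutoff: float = 4.0):
        cols = list(zip(*training_features))                 # column-wise stats
        self.means = [statistics.fmean(c) for c in cols]
        self.stdevs = [statistics.stdev(c) or 1e-9 for c in cols]
        self.cutoff = cutoff

    def is_out_of_distribution(self, x: list[float]) -> bool:
        """Flag inputs with any feature far outside the training range."""
        z = [abs(v - m) / s for v, m, s in zip(x, self.means, self.stdevs)]
        return max(z) > self.cutoff

gate = OODGate([[0.10, 120.0], [0.20, 118.0], [0.15, 125.0], [0.12, 122.0]])
print(gate.is_out_of_distribution([0.13, 121.0]))  # False: in-distribution
print(gate.is_out_of_distribution([0.13, 310.0]))  # True: withhold AI output
```

When the gate fires, the system can decline to produce an AI output and tell the clinician why, which also feeds directly into the human oversight requirement above.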
7. Post-market monitoring (Article 72)
AI Act post-market monitoring goes beyond the PMS plan required under MDR. It must specifically track AI performance metrics over time, including the detection of model drift — the gradual degradation in model performance as real-world data distributions shift away from training data. Your PMS plan needs AI-specific monitoring indicators, not just general clinical performance metrics.
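To make drift detection concrete, the sketch below compares live model-score distributions against the training distribution using the Population Stability Index (PSI). The bin edges and the 0.2 alert threshold are common rules of thumb, not values taken from the AI Act:

```python
# Minimal sketch of model-drift detection via the Population Stability Index
# (PSI). Bin edges and the 0.2 alert threshold are illustrative conventions.
import math

def psi(expected: list[float], observed: list[float], edges: list[float]) -> float:
    def proportions(values: list[float]) -> list[float]:
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1   # bin index from edges
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]    # floor avoids log(0)
    p, q = proportions(expected), proportions(observed)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

training_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_scores     = [0.5, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9]  # shifted upward
value = psi(training_scores, live_scores, edges=[0.25, 0.5, 0.75])
print(f"PSI = {value:.2f}")
if value > 0.2:  # illustrative alert threshold for the PMS plan
    print("Significant drift: trigger PMS investigation")
```

Whatever statistic you choose, the point is the same: the PMS plan needs a defined metric, a defined cadence, and a defined alert threshold that triggers investigation.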
8. Registration in the EU AI database (Article 49)
Providers of high-risk AI systems must register their product in the EU AI database before placing it on the market. The database is comparable in concept to EUDAMED; it is not yet publicly accessible as of early 2026, but registration will be required. This is an administrative step, but one that requires planning, as your technical documentation must be ready to support registration.
How the AI Act maps to your existing MDR and IEC 62304 documentation
The following table summarizes where your existing compliance documentation satisfies AI Act requirements, and where it does not. The right column identifies genuine gaps that need new work.
| Requirement area | Addressed by MDR / IEC 62304 | Additional AI Act obligation |
|---|---|---|
| Risk management | ISO 14971 required | AI-specific risk categories: bias, drift, opacity |
| Technical documentation | MDR Annex II / DHF | AI Act Annex IV — can be integrated into MDR tech file |
| Quality management | ISO 13485 required | AI QMS elements (Art. 17) — can extend existing QMS |
| Post-market surveillance | MDR PMS plan required | AI performance monitoring, model drift tracking required |
| Conformity assessment | Notified Body review (Class IIb/III) | AI Act assessment — can be covered by same NB |
| Data governance | Not explicitly required | Training data representativeness, bias testing (Art. 10) |
| Human oversight | Addressed through labeling / IFU | Must be designed into the product architecture (Art. 14) |
| Transparency | Performance claims in labeling | Algorithmic explainability documentation required |
What SaMD developers should be doing right now
If your product is AI-powered and targets the EU market, the following steps are not optional — they are the minimum viable compliance preparation for August 2026 and August 2027.
Step 1: Classify every AI component in your product
Not all software that uses statistics or rules-based logic qualifies as an "AI system" under the Act's definition. The EU AI Act defines an AI system as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Walk through your product and identify every component that meets this definition. Some components you had assumed were AI may fall outside the Act's scope; others you had not flagged may fall within it. Classify before you plan, and keep the exercise auditable with a simple inventory, as sketched below.
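A lightweight way to run this exercise is a component inventory recording, for each software item, whether it infers outputs from inputs or follows fixed, human-authored rules. The component names and scoping rationales below are hypothetical, and the legal determination remains your regulatory team's call:

```python
# Minimal sketch of an AI-component inventory. Names and rationales are
# hypothetical; scoping decisions belong to your regulatory team.
components = [
    {"item": "arrhythmia classifier (CNN)", "infers_from_inputs": True,
     "rationale": "learned model generates predictions from ECG input"},
    {"item": "drug-dose lookup table", "infers_from_inputs": False,
     "rationale": "fixed, human-authored rules; no inference"},
    {"item": "sepsis risk score (gradient-boosted model)", "infers_from_inputs": True,
     "rationale": "learned model outputs a risk prediction"},
]

for c in components:
    scope = "LIKELY IN SCOPE" if c["infers_from_inputs"] else "likely out of scope"
    print(f'{c["item"]:45} {scope:20} {c["rationale"]}')
```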
Step 2: Conduct a gap assessment against AI Act Annex IV
Map your existing technical file and DHF against the AI Act's Annex IV documentation requirements. The gaps for most teams are concentrated in: training data documentation, bias analysis records, and human oversight architecture documentation. These are the areas that require the most lead time to address properly.
Step 3: Extend your ISO 13485 QMS to cover AI Act Article 17 requirements
You do not need a separate AI QMS. You do need to extend your existing quality system to explicitly cover: data governance processes, AI model validation procedures, post-market AI performance monitoring, and change control for model updates. This is procedural work that can be done systematically — but it takes time and needs to happen before your next conformity assessment.
Step 4: Update your PMS plan with AI-specific monitoring metrics
Your current PMS plan almost certainly does not include model drift detection, subpopulation performance monitoring, or AI-specific incident reporting thresholds. These need to be added. If your product is already on the market, this is the highest-urgency item — post-market monitoring for AI must be active, not retrospective.
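For illustration, a rolling subpopulation monitor could look like the sketch below: it compares live post-market performance per subgroup against the baseline established during design validation. The window size, minimum-sample rule, and tolerance are illustrative assumptions to be set in your PMS plan:

```python
# Minimal sketch of rolling subpopulation performance monitoring for the
# PMS plan. Window size, minimum samples, and tolerance are assumptions.
from collections import defaultdict, deque

WINDOW = 200  # most recent confirmed outcomes kept per subgroup

class SubgroupMonitor:
    def __init__(self, baseline: dict[str, float], tolerance: float = 0.05):
        self.baseline = baseline          # sensitivity from design validation
        self.tolerance = tolerance
        self.outcomes = defaultdict(lambda: deque(maxlen=WINDOW))

    def record(self, subgroup: str, correct: bool) -> None:
        """Log a confirmed clinical outcome for one subgroup."""
        self.outcomes[subgroup].append(correct)

    def alerts(self) -> list[str]:
        out = []
        for group, results in self.outcomes.items():
            if len(results) < 50:         # wait for enough data per subgroup
                continue
            rate = sum(results) / len(results)
            if rate < self.baseline.get(group, 1.0) - self.tolerance:
                out.append(f"{group}: observed {rate:.2f} below baseline")
        return out

monitor = SubgroupMonitor(baseline={"age<65": 0.92, "age>=65": 0.90})
for _ in range(60):
    monitor.record("age>=65", correct=False)  # simulated degradation
print(monitor.alerts())
```

The pre-deployment bias check from the data governance section establishes the baseline; a monitor like this closes the loop by checking that baseline continues to hold in the field.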
Step 5: Engage your Notified Body early
Notified Bodies designated under MDR are beginning to incorporate AI Act compliance verification into their assessments. How they will conduct this in practice is still being defined — which is exactly why early engagement matters. Contact your Notified Body now to understand their current approach to AI Act integration and whether your planned timeline is realistic given their capacity.
A note on FDA alignment
If you are developing for both the EU and US markets, there is meaningful overlap between the EU AI Act's requirements and FDA's emerging AI/ML SaMD guidance — particularly on Predetermined Change Control Plans (PCCP), bias analysis, and post-market performance monitoring. Building documentation that satisfies both frameworks from the start is more efficient than treating them as separate compliance tracks. Talk to your regulatory team early about a dual-market documentation strategy.
The window to prepare is closing
The EU AI Act is not a future compliance horizon for SaMD developers. For new products, August 2026 is four months away. For existing CE-marked products, August 2027 provides more runway — but Notified Bodies are already incorporating AI Act considerations into their MDR assessments today.
The teams that will move smoothly through this transition are the ones doing gap assessments and documentation planning now, not the ones scrambling to retrofit compliance into a product that is already on the market.
Need help navigating compliance for your SaMD? → Get in touch: maria@hattrick-it.com