Regulatory bodies worldwide are making strides to define and standardise AI/ML-enabled medical devices to ensure safety and efficacy, reports Timothy Bubb, Technical Director at IMed Consultancy.
Artificial intelligence (AI) and machine learning (ML) hold tremendous potential for early disease detection, personalised treatment approaches and remote patient monitoring.
Yet, the new regulations, including the EU AI Act, add complexity and challenges, especially as digital health solutions can blur the lines between medical and non-medical tools.
Initiatives such as the US FDA’s predetermined change control plan and the UK's regulatory sandbox, by contrast, aim to provide clearer paths for regulatory compliance in this rapidly evolving space.
Digital health products or digital medical devices?
Digital health spans a broad set of technologies, from non-medical wellness items to medical devices designed for specific purposes.
Mindfulness apps and sleep tracking software, for example, are usually not medical devices, but a single marketing claim change can shift a product into the regulated medical device category.
The International Medical Device Regulators Forum (IMDRF) defines “Software as a Medical Device” (SaMD) as programs intended for medical purposes that perform these functions independently of a hardware medical device.
Several guidance documents supporting the interpretation of what constitutes SaMD have already been published in the EU, UK and US.
AI-powered innovation and application of AI and ML in healthcare
As the industry embraces innovative AI-powered solutions, regulatory bodies worldwide are coming to terms with the complexities of evaluating and approving medical devices that deviate from traditional paradigms.
AI's promise in healthcare includes early disease detection, personalised treatments and enhanced remote consultations via telehealth. AI-powered algorithms can, in fact, analyse vast amounts of health data, identify disease biomarkers, predict disease trajectories and tailor interventions to individual patients.
This saves hours of manual analysis and offers interpretations that would take human reviewers years to complete.
AI and ML have the potential to revolutionise various healthcare areas, including the following:
- diagnostic imaging: AI/ML algorithms can analyse medical images such as X-rays, MRIs and CT scans to facilitate diagnosis
- remote patient monitoring: AI-powered devices can continuously collect data on vital signs and activity levels, enabling early detection and timely intervention
- personalised medicine: AI/ML algorithms can analyse patient data, including genetics, medical history and lifestyle factors to create tailored treatment plans
- clinical decision support systems (CDSS): AI/ML can assist with diagnosis, treatment planning and decision making by examining patient data
- wearable health devices: Equipped with AI/ML, these devices can monitor health parameters such as heart rate, blood pressure and sleep patterns
- robotic surgery: AI-powered robots can assist with minimally invasive procedures, enhancing precision and control
- predictive analytics for healthcare management: AI and ML models can analyse large data volumes to identify patterns and trends, improving healthcare management.
These scenarios highlight AI/ML's versatility and potential impact in medical device development and healthcare delivery.
Furthermore, in underfunded medical research areas, AI/ML can have a life-changing impact by helping to detect environmental or genetic factors, as well as comorbidities, that increase disease risk. This could, potentially, warn individuals about health risks years before diseases manifest.
Understanding digital health regulations
Regulators are determined to keep pace with technological advancements while addressing concerns about data security, bias and safety impacts from poorly performing clinical software tools.
In Europe, the AI Act will regulate such systems across multiple industries, including medical devices and digital health software. It emphasises the importance of reliability and accuracy of AI systems in healthcare and calls out certain digital technologies as being high risk.
These include, for example, systems that are used for emergency patient triaging and/or to evaluate an individual’s eligibility for certain healthcare services. To quote: “They make decisions in very critical situations for the life and health of persons and their property.”1
Currently, medical device software incorporating AI or ML typically falls under Class IIa or a higher regulatory risk classification, requiring formal assessment. The forthcoming EU AI Act adds further scrutiny to AI-based medical devices.
In the US, the FDA has extensive experience in regulating AI/ML-enabled devices. It applies a “benefit-risk” framework to ensure such devices demonstrate sensitivity and specificity in terms of their analytical functionality and intended purpose.
The FDA has also introduced processes for preauthorised software changes, enabling certain updates to be done without further regulatory assessment.2
This recognises the need for some AI/ML systems to be adaptively retrained on new or context-specific data. The approach represents a significant milestone in regulatory innovation, especially given that traditional assessment methods can struggle to regulate such systems effectively and proportionately.
In the UK, a new regulatory sandbox (July 2024) now provides a safe space for healthcare AI tool developers to trial products before full implementation — simplifying the confirmation of clinical efficacy and safety.3
The MHRA will also develop a guidance-based system that allows for more frequent updates to keep pace with innovation, and it has updated the Software and AI as a Medical Device Change Programme objectives to make the UK a globally recognised hub for responsible innovation in medical device software.
In a bid to address bias and inequalities, the MHRA has also confirmed that SaMD and AIaMD must perform across all populations within the intended use of the device and serve the needs of diverse communities.
Collaboration and expertise
As the digital health landscape continues to evolve, it becomes essential to work closely with regulatory experts and adopt multidisciplinary approaches to address the associated challenges and ensure compliance.
The complexity of AI-based medical devices and digital health products necessitates a clear understanding of whether AI/ML functionalities are core to the product or merely supplementary.
This distinction can significantly impact decisions related to product architecture, software algorithm design and regulatory evidence generation strategies.
Collaborating with professionals who have in-depth knowledge of regional regulations and the latest trends in digital health can speed up market entry and ensure adherence to evolving standards and requirements.
References
1. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
2. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial
3. https://www.gov.uk/government/news/mhra-launches-ai-airlock-to-address-challenges-for-regulating-medical-devices-that-use-artificial-intelligence