
FDA and EMA issue guiding principles for AI in drug development

Published: 16-Jan-2026

The set of ten principles gives broad guidance on AI use in evidence generation and monitoring across all phases of a medicine's lifecycle, from early research and clinical trials to manufacturing and safety monitoring.

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly agreed on ten principles for good artificial intelligence (AI) practice across the medicines lifecycle, marking a significant step towards international regulatory alignment in this rapidly evolving area.

The principles provide broad guidance on the use of AI in evidence generation and monitoring, spanning early-stage research and clinical development through to manufacturing, quality assurance and post-market safety surveillance.

They apply to medicine developers as well as marketing authorisation applicants and holders and are intended to underpin future, more detailed AI guidance in both jurisdictions.


AI adoption across the pharmaceutical sector has accelerated in recent years, driven by its potential to improve efficiency, enhance decision-making and shorten development timelines.

However, regulators stress that realising these benefits depends on robust governance and risk mitigation.


Central to the new framework is a principles-based approach designed to balance innovation with patient safety, data integrity and regulatory compliance as AI technologies continue to evolve.

The ten principles are tailored specifically to the drug development cycle and emphasise a human-centric, risk-based approach underpinned by multidisciplinary expertise.

Developers are encouraged to maintain strong data governance, including detailed, traceable documentation of training data sources and processing steps, to ensure alignment with Good Practice (GxP) requirements.

Clarity is another key theme. AI tools should have a clearly defined context of use, with outputs that are accessible, relevant and easily understood by end users.

Performance assessments should be risk-based and consider the complete system, including human-AI interactions, using metrics appropriate to the specific application.

Lifecycle management also features prominently, with regulators recommending regular re-evaluation of AI systems to support ongoing monitoring, troubleshooting and continuous improvement throughout their use in manufacturing and beyond.

The initiative builds on discussions from the FDA-EU bilateral meeting held in April 2024.

With ethics at the forefront, both agencies have signalled their intention to continue working towards global convergence on AI governance to support responsible pharmaceutical innovation worldwide.

The guidelines are available on the FDA's and the EMA's respective websites.
