A recent survey from Klick Health and Momentum Events found that the majority (65%) of pharmaceutical marketing and promotional review industry professionals in the US do not trust AI for creating regulatory compliance submissions.
Top concerns included hallucinations (40%), lack of traceability/audit trail (20%) and lack of transparency/explainability (12.5%).
"These findings reinforce the critical need for healthcare marketing compliance tools that provide both visibility and accountability in their decision-making," according to Alfred Whitehead, EVP, Applied Sciences at Klick.
More survey highlights include the following:
- Half of the participants said they trusted AI to conduct reviews of materials, suggesting respondents are more reluctant to use AI for higher-risk, sensitive tasks, such as creating regulatory review packages and submissions
- More than a third (37.5%) of those polled said their companies review more than 100 assets each business quarter, while fewer than one-fifth (15%) said they review fewer than 25 assets per quarter
- More than half (55%) said their organisations are currently exploring AI for review purposes, while a quarter (25%) said they are in the pilot phase. Only 5% said they have partially deployed it, 2.5% said they have fully deployed it and 12.5% said they are not considering AI for this area
- More than one-third (35%) of survey respondents described their primary role in the review process as MLR Operations/Management; 30% in Regulatory; 15% in Marketing; 12.5% in Medical, Medical Affairs or Medical Communications; 5% in Legal; and 2.5% in Technology.
The Klick/Momentum study echoes findings from the 2025 Trust in AI survey by KPMG and the University of Melbourne, which reported that the proportion of people worried about AI tools jumped from 49% in 2022 to 62% in 2024.
More than half (56%) said using AI had led to errors in their work, likely the result of relying on incorrect or hallucinated output from generative AI tools, or of misinterpreting AI responses.
"Many employees report inappropriate, complacent and non-transparent use of AI in their work, contravening policies and resulting in errors and dependency," the study cited.
There are alternatives to "black-box" AI systems, which give answers without explaining how they reach a conclusion and therefore make decisions difficult to validate. These alternatives are called glass-box systems.
Julie Turnbull, SVP, Science + Regulatory, said: "There are glass-box AI tools, built with expert human medical and regulatory decision intelligence, that offer users both visibility into their workflows and the reasoning behind their outputs."
"They are well-suited for high-risk, sensitive tasks such as healthcare marketing regulatory submissions.”
As an example, Turnbull pointed to Klick Guardrail, a glass-box AI tool that goes beyond offering simple predictive suggestions by applying complex reasoning and a proven decision engine that shows its logic.
The tool, purpose-built for the healthcare marketing industry, also provides traceability to help ensure pharma marketing compliance while navigating regulatory frameworks more intelligently and efficiently.
Klick Guardrail is in pilot programmes with multiple pharma and biotech companies and is currently being used internally at the agency to enhance asset authoring, review and Medical, Legal and Regulatory (MLR) processes.
BJ Jones, Chief Commercial Officer, NewAmsterdam Pharma, said: "The Klick Guardrail system stands out because it makes responsible AI not just possible, but practical."
"It was heartening to see such a thoughtful approach to ensuring safety and transparency built right into the foundation of how these tools are used to support accountability, auditing and trust."
