Having signed off thousands of job bags under the ABPI Code, I see AI tools arriving in MLR not as a binary threat but as a governance challenge. In this short briefing I set out the precise controls I would insist on if I were implementing an AI-enabled MLR platform today — practical, clause-led and focused on defensibility, not hype.
TL;DR: AI can draft and structure content but does not replace human substantiation or certification under the ABPI Code. Prioritise governance: defined intended use, mandatory human sign-off, evidence verification, licence checks and strict UK localisation.
1. Issue summary — what we actually face
AI is arriving in everyday Medical Affairs MLR under the ABPI Code
AI-assisted drafting is already slipping into Medical Affairs MLR workflows. I see it used to refresh congress slides, draft HCP email summaries, build claims comparison tables, and shape medical information responses. In an AI-enabled regulatory strategy, that sounds efficient—and it can be. AI is good at structure, tone, and speed.
The real risk: polished drafts that drift outside the pathway
The compliance problem is rarely “the model hallucinated.” The practical exposure is process drift: a draft looks finished, gets shared informally, and someone treats it as “only a draft” until it quietly becomes the version that’s used. That’s where ABPI Code AI risk sits—not in the technology, but in how outputs enter (or bypass) the job bag.
Dr Anzal Qurbain: “The weakest link in AI MLR is not the model — it is how an output enters the job bag.”
The ABPI tests don’t change (Clause 1.17, Clause 2, Clause 6)
Even if AI wrote the first 90%, the questions stay the same under the 2024 ABPI Code and related guidance (including 2025 Congresses and Events):
- Clause 2 (high standards): is the content balanced and appropriate?
- Clause 6 (accuracy & substantiation): are claims supported by robust evidence and capable of verification?
- Clause 1.17 (promotion): does the context make it promotional, even if it’s framed as “information”?
Audit trail is the difference between control and exposure
If challenged, I need evidence: the job bag, the approval inbox, and Final Signatory records showing what changed, why, and which references were checked.
2. Why it matters now — regulatory and practical context
FDA–EMA AI principles and the shift to provable oversight
AI-enabled MLR is arriving at the same time regulators are tightening expectations on how AI-influenced outputs are controlled. The FDA–EMA guiding principles (14 January 2026) and the EU AI Act both push the same basics: traceability, accountability, and human oversight. For me, that makes governance the starting point—not model performance.
Dr Anzal Qurbain: “Regulators now expect demonstrable human oversight and traceability; ‘AI drafted’ is not a defence.”
AI regulatory strategy 2026: credibility, RWE, and audit trails
The FDA’s 2025 draft credibility assessment framework (and rising expectations around RWE) changes the scrutiny on anything that looks “data-driven.” If AI helps draft claims, summaries, or comparisons, I assume reviewers will ask: Who checked it, against what evidence, and where is the record? That’s why a risk-based AI approach matters—higher-risk use cases need stronger controls, clearer documentation, and tighter sign-off.
Practical pressure: congress season, global pushes, and UK localisation risk
In the real world, risk spikes during congress season and fast global content cycles. Localisation tools can speed approvals, but they can also omit UK mandatory statements (including Clause 26) or shift tone into promotion. ABPI’s 2025 Congress & Events Guidance is a reminder that “busy” is not a compliance excuse.
What I align to internally
- Documented human oversight and audit trails whenever AI influences outputs
- Early regulatory engagement and documented processes to reduce failure risk
- Governance integrated with GxP, digital quality systems, and enterprise risk management
3. ABPI Code analysis — the clauses that matter
When I map AI-enabled MLR to the core pillars of regulatory strategy, I start with a simple point: the ABPI Code is technology-neutral. The 2024/2025 Code and Guidance are still the operational baseline for MLR decisions, even if an AI tool touched the draft. What changes is the need for tighter SOPs so traceability and control can be evidenced.
“The tests imposed by the Code do not alter because an AI assisted the draft; they remain evidentiary and human-led.”
Dr Anzal Qurbain
Clause 1.17 (and Clause 2): promotion and high standards
Clause 1.17 matters because implication can make something promotional. AI summaries often “smooth” language, and that can quietly shift tone. Clause 2 then pulls you back to high standards: polished does not mean compliant.
Clause 6: accuracy and substantiation
This is where regulatory compliance and pharmacovigilance meet day-to-day copy. Every claim must be capable of support with evidence. AI can draft, but it cannot carry evidentiary accountability. I require:
- evidence uploaded before certification
- manual reference checks against the SmPC
- clear separation of licensed vs exploratory findings
Clause 14: comparative claims
AI loves confident comparisons. Clause 14 is unforgiving: comparative claims need head-to-head data or suitable substantiation. “Best”, “better”, or even implied superiority must be justified and documented.
Clause 26: global-to-UK responsibility (plus congress materials)
Clause 26 makes the UK company responsible for UK use. AI localisation can drop mandatory statements or shift context—especially for congress assets, where the ABPI 2025 Congress & Events Guidance is often tested. Final Signatory accountability is unchanged, and your process controls must evidence the certification route was used.
4. Common pitfalls — realistic examples of process drift
In practice, Code exposure rarely starts with bad intent. It starts with quiet process drift: an AI draft looks “finished”, moves fast, and slips around the documented pathway. That’s why pharmaceutical AI governance best practice is designed to catch informal bypasses, not just “bad outputs”.
The concept-review approval stage gets skipped “just this once”
An AI tool drafts an HCP email summary. It’s polished, so it gets forwarded, then copied into a congress slide deck. Nobody uploads it to the job bag because it’s “only a draft”. A week later it’s presented externally with no traceable certification record.
Dr Anzal Qurbain: “I’ve seen polished copy go straight into presentation decks — and that is how complaints start.”
Optimised wording creates implicit superiority
I often see models “improve” language: stronger efficacy verbs, cleaner comparisons, simpler safety lines. The drift is subtle: a claim becomes comparative in tone without head-to-head evidence, or an exploratory endpoint reads like a confirmed benefit. This is where an AI credibility assessment framework must force checks for substantiation, licence alignment, and balanced safety framing.
Localisation that drops UK requirements
AI-enabled localisation can remove UK mandatory statements or shift from factual to promotional tone. Global content may be accurate, but UK context is different—and competitors often spot breaches linked to Clauses 6, 14, or 1.17.
| Common drift indicators | What PMCPA will test |
|---|---|
| Optimised efficacy phrasing; implied benefits; omitted mandatory statements | Wording, context, substantiation, audience, certification records |
Investigations rarely care that AI was involved; they care whether the approval pathway can be evidenced.
5. Practical framework — the controls I would require
If I were implementing an AI-enabled MLR tool, my focus would not be on the sophistication of the model. It would be on governance architecture: an AI governance framework pharma teams can evidence under the ABPI Code.
1) Define intended use in SOPs (AI validation and GxP compliance)
I would hard-code boundaries in SOPs so “support” doesn’t become unsupervised generation. I’d specify whether the tool is allowed to:
- Draft copy only (default)
- Propose claims (only with strict controls)
- Summarise clinical data (with context rules)
- Localise global content for the UK (with local sign-off)
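To make those boundaries concrete, the SOP scope can be expressed as a small, machine-checkable allow-list so “support” cannot quietly widen into unsupervised generation. This is a minimal sketch, not a real platform API; `AIUseMode` and `check_intended_use` are names I have invented for illustration:

```python
from enum import Enum

class AIUseMode(Enum):
    """Permitted AI use modes mirroring the SOP boundaries above (illustrative names)."""
    DRAFT_COPY = "draft_copy"          # default: drafting support only
    PROPOSE_CLAIMS = "propose_claims"  # allowed only with strict controls
    SUMMARISE_DATA = "summarise_data"  # context rules apply
    LOCALISE_UK = "localise_uk"        # requires local (UK) sign-off

# Higher-risk modes that must not run without a documented extra control
RESTRICTED_MODES = {AIUseMode.PROPOSE_CLAIMS, AIUseMode.LOCALISE_UK}

def check_intended_use(mode: AIUseMode, control_recorded: bool) -> bool:
    """Return True only if the requested use is within SOP boundaries."""
    return not (mode in RESTRICTED_MODES and not control_recorded)
```

The design point is that the default is the narrowest mode, and anything riskier fails closed unless the extra control has been documented first.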
2) Mandatory human certification gate (no parallel pathways)
Every AI-influenced asset must enter the same job bag and follow the identical approval route as agency copy. Human sign-off and traceable records are what prevent informal circulation that increases Code exposure.
- AI output is logged into the approval system
- Final Signatory reviews the final version
- Approval inbox retains complete, searchable records
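The single-pathway rule above can be enforced mechanically: certification stays blocked until every gate in the job bag is satisfied. A minimal sketch, assuming a `JobBagEntry` record whose field names are illustrative rather than drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class JobBagEntry:
    """One AI-influenced asset moving through the standard approval route (illustrative)."""
    asset_id: str
    ai_output_logged: bool = False          # output entered into the approval system
    final_signatory_reviewed: bool = False  # human review of the final version
    approval_inbox_record: bool = False     # complete, searchable record retained

    def can_certify(self) -> bool:
        """No parallel pathways: every gate must pass before certification."""
        return (self.ai_output_logged
                and self.final_signatory_reviewed
                and self.approval_inbox_record)
```

An asset that bypasses any one gate simply cannot be certified, which is the point: informal circulation fails closed instead of failing silently.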
3) Substantiation controls (data governance and compliance)
To meet ABPI substantiation expectations, I would require mandatory evidence upload before certification and manual verification of any references. I’d also enforce:
- UK SmPC alignment checks
- Explicit comparative review (no implied superiority)
- Licence & context checks: separate licensed claims from exploratory findings
Additional guardrails: provenance + UK localisation (Clause 26)
Dr Anzal Qurbain: “I insist on a short provenance line in every approval record — not bureaucracy, but defensibility.”
I’d capture:
- Was AI used, and for what purpose?
- Were references independently verified?
- Who reviewed/edited?
- For global-to-UK localisation: local sign-off confirming UK mandatory statements, tone, and restrictions (Clause 26)
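That provenance line can be captured as a single structured record attached to each approval entry, so the answers to the questions above are never left to memory. A sketch under assumed field names (all of them mine, for illustration only):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceLine:
    """One-line provenance record per approval entry (field names illustrative)."""
    ai_used: bool
    purpose: str                 # e.g. "draft copy", "global-to-UK localisation"
    references_verified_by: str  # a named human, never the tool itself
    reviewed_edited_by: str
    uk_local_signoff: bool       # Clause 26: UK mandatory statements confirmed
    recorded_at: str             # ISO-8601 timestamp

def record_provenance(ai_used: bool, purpose: str, verifier: str,
                      editor: str, uk_signoff: bool) -> ProvenanceLine:
    """Capture the provenance line at the moment of review."""
    return ProvenanceLine(ai_used, purpose, verifier, editor, uk_signoff,
                          datetime.now(timezone.utc).isoformat())
```

Freezing the record matters: once written into the approval inbox, a provenance line should be immutable evidence, not an editable note.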
6. For Mentors & Final Signatories — coaching to prevent slips
When AI enters MLR, I lean harder on mentors and Final Signatories. The tool may be new, but the ABPI expectations are not: accurate, balanced, within licence, substantiated, and properly certified. This is where multidisciplinary AI governance teams help—Medical, Regulatory, Compliance, and Digital agreeing one standard way to work.
Dr Anzal Qurbain: “Coaching and structured briefing are the practical defences against process drift.”
Briefing templates that force the right checks
I use a short signatory briefing template so nobody relies on “looks good” heuristics. It should prompt:
- SmPC alignment (wording, population, endpoints, safety framing)
- Evidence uploaded and verified (no “AI-cited” references unchecked)
- Audience and whether it meets the definition of promotion
- Wording for implied superiority/comparatives
- Dissemination pathway (where it will be used, by whom, and when)
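The briefing prompts above amount to a blocking checklist: sign-off should not proceed while any item remains unconfirmed. A minimal sketch (check wording abbreviated, and the names are mine, not a real template):

```python
# Signatory briefing checks, condensed from the prompts above (illustrative wording)
BRIEFING_CHECKS = [
    "SmPC alignment (wording, population, endpoints, safety framing)",
    "Evidence uploaded and independently verified",
    "Audience assessed against the definition of promotion",
    "Wording reviewed for implied superiority / comparatives",
    "Dissemination pathway documented",
]

def outstanding_checks(confirmed: dict) -> list:
    """Return the checks not yet confirmed; sign-off stays blocked while non-empty."""
    return [check for check in BRIEFING_CHECKS if not confirmed.get(check, False)]
```

Because unconfirmed is the default, a reviewer who skips an item gets it surfaced back, rather than relying on a “looks good” heuristic.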
Mentor micro-sessions on AI drift patterns
Research and experience align: structured mentoring and clear checklists materially reduce informal bypasses. I run short sessions on common drift patterns—optimised efficacy language, softened risk, and “helpful” summaries that change meaning—and I insist on job-bag discipline every time.
Pre-certification checklist + traceable records
For digital quality systems integration, I require a pre-certification checklist and retention of the approval inbox entry for audit. I also add a provenance line to support human oversight and AI explainability:
AI used? purpose? references verified by whom? edits by whom? date/time?
That documentation is what protects defensibility if a competitor files a complaint.
7. Closing reflection — balanced, not alarmist
I’m optimistic about AI in MLR, but I’m not starry-eyed. Used well, AI can improve speed and consistency in drafting, triage, and structuring content. Used badly, it simply makes existing gaps travel faster—and that’s where ABPI Code risk shows up.
For me, pharmaceutical AI governance best practice is simple: governance must come before deployment. The UK direction of travel supports innovation, but it also expects accountability and demonstrable governance. That theme is consistent across frameworks, including the FDA–EMA guiding principles (14 Jan 2026), the FDA draft credibility framework (2025), and EU AI Act expectations. In other words, your 2026 AI regulatory strategy should be built around traceability, not model hype.
That’s also what matters in the real world. If a complaint lands, the PMCPA (or a complainant) will test your documentation and process: job bag entry, evidence uploads, licence alignment, local sign-off, and Final Signatory certification. They won’t spend time debating whether a model produced the first draft. A defensible AI-enabled regulatory strategy is one that keeps an end-to-end audit trail and adds a short provenance line showing how AI was used, what was verified, and who edited.
Dr Anzal Qurbain: “Accountability remains human under the ABPI Code — and that will not change.”
So my closing view is calm: AI is an enabler. Governance determines risk. If we embed AI into existing compliance architecture—without creating side channels—we can gain efficiency without weakening standards.
Subscribe to AskAnzal for weekly ABPI case breakdowns and Final Signatory insights.

