Mar 31, 2026

AI in Healthcare: When Innovation Moves Faster Than Governance

By Jarred Evans, Director at PDR, and Seetal Sall, Point of Care Testing (PoCT) Lead, Cardiff & Vale University Health Board.

Artificial intelligence is no longer an emerging technology in healthcare — it is embedded, ambient and increasingly invisible. From workflow optimisation and decision support to middleware, reporting layers and user interfaces, AI and Large Language Models (LLMs) are increasingly being woven into the systems that clinicians and healthcare organisations rely on every day.

Much of this software does not present itself as a medical device. Much of it may never be classified as one, and yet it increasingly touches clinical workflows, patient data, and — directly or indirectly — patient outcomes.

This creates a growing tension for healthcare providers: how to safely harness the efficiencies and benefits of AI, while ensuring that patient safety, data integrity and clinical accountability remain paramount.

Recent demonstrations of AI-enabled features embedded within diagnostic middleware and operational systems have brought this tension into sharp focus. They raise questions that many organisations are only just beginning to grapple with.

The Core Issue: Probabilistic Systems in Deterministic Environments

Large language models and other generative AI systems are probabilistic by design. They generate outputs based on likelihood, not certainty. This makes them powerful at summarisation, pattern recognition and natural language interaction — but also prone to producing confident, plausible and incorrect outputs, often referred to as “hallucinations”.
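
To make “probabilistic by design” concrete, consider the deliberately simplified Python sketch below. The token names and probabilities are invented for illustration; a real model computes a distribution over tens of thousands of tokens, but the final sampling step works in essentially the same way, which is why identical inputs can yield different, equally confident-looking outputs.

    import random

    # Toy next-token distribution for a prompt such as "The troponin result is ...".
    # A real LLM derives these probabilities from billions of parameters, but the
    # sampling step at the end is conceptually the same. All values here are invented.
    next_token_probs = {
        "elevated": 0.55,
        "normal": 0.30,
        "pending": 0.10,
        "unavailable": 0.05,
    }

    def sample_next_token(probs: dict[str, float]) -> str:
        """Pick a token at random, weighted by its probability."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Running the same "prompt" five times produces plausible output every time,
    # but not the same output -- and nothing marks the 5% answer as less reliable.
    for _ in range(5):
        print(sample_next_token(next_token_probs))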

In a consumer context, this may be inconvenient or misleading; in a healthcare context, it can be unsafe. The challenge is not simply that AI can be wrong — clinicians deal with uncertainty all the time — but that:

  • Errors may be non-obvious
  • Failures may be inconsistent and difficult to reproduce
  • Outputs may be hard to audit retrospectively
  • Responsibility and liability may be unclear

Traditional healthcare governance frameworks are built around deterministic systems: validated algorithms, traceable logic, defined failure modes. Generative AI does not fail in the same way — and our existing checks and balances are not yet well aligned to this reality.

Falling Between the Cracks: Software That Isn’t “a Medical Device”

A particularly acute risk sits with AI-enabled software that:

  • Is not marketed as Software as a Medical Device (SaMD)
  • Is often positioned as “decision support” or “workflow assistance”
  • Typically sits upstream or downstream of clinical decision-making
  • Is embedded within middleware, dashboards or reporting tools

These systems may influence what information a clinician sees first and potentially how results are summarised or prioritised. This in turn may drive which actions are suggested, deprioritised or escalated.

While the potential impact of these systems can be profound, current healthcare procurement processes rarely demand sufficient disclosure of aspects such as model architecture and limitations, training data provenance, data governance, or how AI outputs are validated, constrained or overridden.
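
By way of a hedged illustration, the disclosure described above could be requested as structured data rather than free text. The sketch below is hypothetical Python; the field names are not drawn from any existing procurement standard or regulation, but they indicate the level of specificity a provider could reasonably ask a supplier to commit to in writing.

    from dataclasses import dataclass

    # Hypothetical sketch of a structured supplier disclosure. Every field name
    # and example value here is illustrative, not taken from a real framework.
    @dataclass
    class AIDisclosure:
        uses_llm: bool                   # is a generative model involved at all?
        model_description: str           # architecture, version, update cadence
        training_data_provenance: str    # sources, licensing, use of patient data
        known_limitations: list[str]     # documented failure modes
        validation_approach: str         # how outputs are validated or constrained
        override_mechanism: str          # how users can challenge or override outputs
        data_flows: str = "unspecified"  # where inference data is sent and retained

    example = AIDisclosure(
        uses_llm=True,
        model_description="Third-party LLM, vendor-hosted, quarterly updates",
        training_data_provenance="Undisclosed by upstream model provider",
        known_limitations=["may mis-summarise results", "no uncertainty estimates"],
        validation_approach="Spot-checked against clinician review during pilot",
        override_mechanism="Clinician can dismiss or edit any AI-generated summary",
    )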

The Governance Gap: Liability, Accountability and Trust

In the UK, the allocation of legal and professional liability for AI-derived patient harm remains largely untested. If an AI-generated insight contributes to an adverse event:

  • Who is responsible — the clinician, the organisation, the supplier, or the model?
  • Was the output advisory, assistive or influential?
  • Was the system transparent enough for meaningful human oversight?

Healthcare providers cannot afford to discover the answers to these questions retrospectively, through incidents. A structured, informed and cautious approach to AI adoption is not anti-innovation — it is pro-patient safety.

What Good Practice Might Look Like for Healthcare Providers

Healthcare organisations should treat AI-enabled software — whether regulated or not — as clinically adjacent technology and apply proportionate scrutiny. Good practice includes:

1. Explicit AI Disclosure Requirements
Procurement processes should require suppliers to clearly state:
  • Whether AI/LLMs are used
  • Where they sit in the workflow
  • What decisions they influence

2. Defined Intended Use and Boundaries
AI functionality should have a clearly articulated purpose — and explicit exclusions.

3. Human-in-the-Loop by Design
AI outputs should support, not replace, human judgement — with clear opportunities for challenge and override.

4. Auditability and Traceability
Outputs must be logged, explainable at an appropriate level, and reviewable after the fact (see the sketch after this list, which pairs this point with point 3).

5. Data Governance and Information Risk Assessment
Training data, inference data flows, storage and retention must align with information governance expectations.

6. Ongoing Oversight, Not One-Off Approval
AI systems evolve. Governance must account for updates, retraining and drift over time.
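
Points 3 and 4 lend themselves to a concrete illustration. The following is a minimal, hypothetical Python sketch rather than a reference implementation: the function and field names are invented, but the pattern is that the AI output is advisory only, the clinician's decision is what takes effect, and both are written to an audit record alongside the model version needed to review the episode later.

    import json
    from datetime import datetime, timezone

    def review_ai_suggestion(suggestion: str, model_version: str,
                             clinician_decision: str, clinician_id: str,
                             audit_log: list) -> str:
        """Human-in-the-loop gate: the AI output is advisory only.

        Nothing is actioned on the AI suggestion alone; the clinician's
        decision takes effect, and both are recorded for later review.
        """
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # needed to reproduce and audit later
            "ai_suggestion": suggestion,
            "clinician_id": clinician_id,
            "clinician_decision": clinician_decision,
            "overridden": clinician_decision != suggestion,
        })
        return clinician_decision  # the human decision, not the AI one, is actioned

    # Usage: the AI proposes, the clinician disposes, and the pair is logged.
    log: list = []
    final = review_ai_suggestion(
        suggestion="Prioritise sample 1042 for repeat testing",
        model_version="middleware-ai-2.3",
        clinician_decision="No repeat needed; result consistent with history",
        clinician_id="poct-lead-01",
        audit_log=log,
    )
    print(json.dumps(log, indent=2))

One useful side effect of the 'overridden' flag: the rate of disagreement between clinicians and the model becomes measurable over time, which is exactly the kind of signal point 6 needs for spotting drift after updates or retraining.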

Organisations such as the NHS are increasingly aware that AI governance must extend beyond traditional medical device frameworks and into digital, operational and diagnostic infrastructure.

For developers and vendors, trust is likely to be a key differentiator: healthcare providers are not asking for less innovation — they are asking for responsible innovation. This may mean:

  • Designing AI features with safety constraints, not just capability
  • Being transparent about limitations, uncertainty and failure modes
  • Avoiding “black box” positioning in safety-critical contexts
  • Aligning with healthcare governance expectations, not bypassing them (this is a crucial one)
  • Supporting providers with documentation suitable for clinical risk assessment

A key message here for me is that suppliers should not assume that avoiding medical device classification removes the obligation to design for safety, auditability and accountability. AI offers real and transformative potential for healthcare — from reducing administrative burden to improving consistency, access and efficiency — but its integration into clinical ecosystems must be deliberate, governed and patient-centred.

The challenge now is not whether AI will be used, but how responsibly. It needs a coordinated approach. One that brings together clinicians, informatics, governance teams, regulators and industry. Without it, we risk embedding powerful probabilistic systems into healthcare by default rather than by design.