Gartner has recently noted that AI is nearing the Trough of Disillusionment: the stage where the initial excitement fades and the technology is finally judged on its ability to deliver. This is the hard part. Moving from hype to meaningful impact is rarely smooth. Missteps, unforeseen challenges, and difficult trade-offs are inevitable as we work to make AI genuinely useful and productive. This is particularly true for connected medical devices, where integration, reliability, and patient safety add extra layers of complexity. In healthcare, where stakes are high and resources scarce, it is essential to understand how AI creates value and where it can introduce new risks.
This article looks at how AI is being used in healthcare, from improving core tasks like medical imaging to reshaping support activities and resource use. Using examples from the NHS, we explore both the efficiency gains and the challenges, including changes to quality control and clinician behaviour. Finally, we highlight why careful monitoring and thoughtful implementation are critical as AI moves from hype to real-world impact.
A model of AI use in healthcare
As AI is increasingly applied in healthcare, we need a clear understanding of the gains it can deliver. Any useful model should consider the core task being performed, the support tasks required to make it work, and the scarce resources the wider healthcare system must commit.
These three elements can be described as:
- Core task: A key medical activity carried out during patient treatment. These are typically activities too complex for traditional technology but feasible for AI. The most common application of AI in healthcare is reading medical images.
- Support task: The tasks needed to complete the core activity, such as quality control checks or communication with other clinicians.
- Scarce resource: The limited resources needed to carry out these activities, such as doctors, specialized staff, or technology.
The AI agent operates on the core task to free up scarce resources, improve efficiency, and deliver measurable gains. However, support tasks can reduce these gains if they, and the scarce resources around them, need to be reconfigured.
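To make that trade-off concrete, here is a minimal, hypothetical sketch of the model in Python. The tasks, durations, and overheads are invented for illustration, not measured NHS figures; the only point is that the net gain is the time saved on the core task minus any new support-task overhead.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    clinician_minutes: int  # scarce-resource time the task consumes

def net_gain(core_before: Task, core_after: Task,
             support_before: list[Task], support_after: list[Task]) -> int:
    """Clinician minutes freed per case: core-task savings minus any
    new support-task overhead (monitoring, QC, audit)."""
    saved = core_before.clinician_minutes - core_after.clinician_minutes
    overhead = (sum(t.clinician_minutes for t in support_after)
                - sum(t.clinician_minutes for t in support_before))
    return saved - overhead

# Illustrative numbers only: AI cuts image reading from 60 to 5 minutes,
# but adds QC and audit work around the tool.
print(net_gain(
    Task("read image", 60), Task("review AI output", 5),
    support_before=[Task("peer cross-check", 10)],
    support_after=[Task("verify AI output", 20), Task("audit log review", 10)],
))  # -> 35 minutes freed per case, not the headline 55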
AI in the NHS
A closer look at AI in healthcare highlights both its potential and the challenges it brings. Two examples illustrate this clearly.
The first comes from Dr. Raj Jena, the UK’s first clinical professor of AI in radiation oncology. He described the impact of Osairis, a tool developed with Microsoft to plan radiotherapy for cancer patients. Osairis cuts the planning time for an individual treatment from hours to minutes. By automating this core task, the AI agent significantly improves efficiency and frees clinicians to focus on higher-value work.
The second example comes from Pritesh Mistry, who discussed breast screening programmes. Each screening image is reviewed by two doctors, who both examine the radiological image and cross-check each other’s work for errors. Introducing an AI agent could replace one of these doctors, maintaining low error rates while freeing staff to see more patients. Quality control then rests not with two doctors but with one doctor and their AI tool.
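A rough way to see why double reading keeps error rates low, and what an AI reader would need to match, is to treat each reader’s misses as independent. The miss rates in this sketch are illustrative assumptions, not screening-programme statistics.

```python
# Illustrative, assumed miss rates: real screening error rates differ,
# and two readers' errors are never fully independent.
human_miss = 0.10  # assumed probability that one doctor misses a finding
ai_miss = 0.10     # assumed miss rate of the AI reader

# Under independence, a finding is missed only if both readers miss it.
two_doctors = human_miss * human_miss  # 1.0% missed
doctor_plus_ai = human_miss * ai_miss  # 1.0% missed

print(f"two doctors:  {two_doctors:.1%} of findings missed")
print(f"doctor + AI:  {doctor_plus_ai:.1%} of findings missed")

# The two arrangements match only while independence holds. If the
# doctor starts to defer to the AI, their errors become correlated and
# the combined miss rate rises -- which is exactly the new QC question.
```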
These examples highlight a shift in responsibilities. Moving oversight from two doctors to a single doctor supported by an AI tool raises new questions about accountability, system design, and the processes needed to ensure safe, reliable outcomes.
The trough of disillusionment
This is where the challenges of AI become clear. While the core task — such as reading an image — is made more efficient, the impact on supporting processes like quality control is less certain. Tools like Osairis must be extensively tested to ensure they operate correctly, not only at the time of design but throughout their lifetime. Managing and monitoring these tools adds new costs and responsibilities.
The cost of introducing and maintaining AI must now be weighed against that of the traditional approach, such as having two doctors perform the task. As new AI tools are deployed, this subtle shifting of roles and responsibilities will repeat again and again.
Gartner refers to this stage as the Trough of Disillusionment, the point where initial hype fades and the reality of shaping technology to solve real-world problems becomes evident. This is where the hard work begins: deciding how to apply AI within an existing system and how to transition to new processes without compromising care.
Changing behaviour, not just tools
The NHS examples above illustrate that introducing AI is not just about improving efficiency — it also changes processes and affects people. Quality control and the interaction between patients, clinicians, and tools must be considered carefully. Often, these components are treated as static, assumed to remain unchanged by reorganisation — but AI can alter how staff work and make decisions.
A recent article in The Lancet highlights this issue, reporting apparent deskilling among endoscopists exposed to AI. While the study is limited, it raises important considerations. AI assistance had been expected to improve adenoma detection by around 20%, but in practice, sustained exposure to the tools appeared to reduce endoscopists’ unassisted detection rates by roughly 20%. The use of AI may unintentionally reduce attentiveness over time, showing that technology can affect clinician behaviour in ways that are hard to predict.
This is a worrying finding, though it is important not to make too much of a single study. The authors themselves point to weaknesses in their approach, how they mitigated them, and the further work needed to determine whether the result generalises.
This points to two critical questions when introducing AI in healthcare:
- Do you understand all the processes, both core and support, that contribute to clinical success? This includes how tools will affect clinicians’ behaviour and how staff may adapt their approach. A deeper understanding is needed than with traditional technology deployments.
- How do you know the AI tool is performing as intended? The EU AI Act already mandates post-market surveillance. Like a drug, an AI tool requires continuous monitoring, with new processes defined to manage and safeguard its use in clinical settings; a minimal sketch of what such a check might look like follows this list.
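As an illustration of the monitoring half of that question, here is a minimal, hypothetical sketch of a post-market performance check. The baseline rate, the alert threshold, and the one-sided z-test are assumptions chosen for clarity; a real surveillance plan would be agreed with regulators and clinical governance, and would track far more than a single rate.

```python
import math

# All numbers here are illustrative assumptions, not regulatory thresholds.
BASELINE_RATE = 0.25  # detection rate locked in when the tool was validated
ALERT_Z = 3.0         # alert if the drop exceeds three standard errors

def detection_drifted(detected: int, cases: int) -> bool:
    """True if the recent detection rate sits significantly below the
    validated baseline (one-sided z-test on a binomial proportion)."""
    rate = detected / cases
    se = math.sqrt(BASELINE_RATE * (1 - BASELINE_RATE) / cases)
    return (BASELINE_RATE - rate) / se > ALERT_Z

# e.g. 70 detections in the last 400 cases: 17.5% against a 25% baseline.
if detection_drifted(detected=70, cases=400):
    print("alert: detection rate below validated baseline, trigger clinical review")
else:
    print("within expected variation")
```

The point is less the statistics than the process: someone must own this check, run it for the lifetime of the tool, and have the authority to act when it fires.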
These questions remain largely unanswered in current healthcare systems. While the Financial Times article from which the NHS examples above are drawn suggests AI can be introduced through standard technological change management and business process re-engineering, the Lancet study highlights that AI also transforms the clinicians who apply it.
The central challenge is managing all these changes effectively — balancing efficiency gains with safety, staff adaptation, and long-term reliability.