The Myth of Seamless Integration


I remember as a child first learning about the existence of Formula 1 cars.

They look unbelievably cool and can go over 200mph… whoa.

My next thought was: Surely, everybody must want one?

Why is the street full of dumpy old Ford Escorts when you could have a Formula 1 car, the best car in the world?


I clearly had a bad understanding of the boot space required for a supermarket shop.

But I also had a bad understanding of many other factors such as roads, comfort, money and fuel.

What was lacking can be summarised, for the purposes of an AI article, as an understanding of implementation: a topic I have become increasingly familiar with through my research work in AI.

I have had the genuine pleasure of attending a few AI-related conferences over the last year. The gulf between vendors and clinicians on the topic of implementation is huge.


I would float around the exhibition hall and see phrases such as “…seamless integration…”, “…0% clinician effort…” and “hassle-free” used liberally on vendor stalls, then I would enter a lecture hall and listen to academic clinicians explain how difficult implementation can be.


Vendors cannot seamlessly integrate an AI solution into your health-care system because they don’t know:


1. The problem you wish to solve

2. The state of your data

3. The workflows that interact or run parallel to this problem

4. The people who need to say ‘yes’ in your organisation

5. The capacity of your existing IT infrastructure

And last but definitely not least, they absolutely do not know:

6. How their solution will perform in your novel environment once points 1-5 have been defined.


The reality, at least at this moment, is that broad AI is not being used for broad purposes in healthcare.

We don’t have LLMs chirping away on ward rounds, educating all and suggesting clever investigations with a Hugh Laurie-esque wink.


We don’t have all our x-rays being reported by software with no human oversight.

What we are seeing is Narrow AI being used for specific use cases, typically under significant scrutiny, evaluation and usually some form of ethics-approved research study.


And this is good.

Because we need to build trust.


However, implementation is a team sport.

At a minimum you need:

• Clinician(s) who can understand the context in which the AI is being introduced

• Clinician(s) with academic capabilities

• Data managers

• Project managers

• Legal

• IT



And depending on the size and risk of the project, you’re also probably going to need:


• Third party contractors in the form of software developers

• Post market surveillance plans

• Evidence generation to make a case for ongoing procurement


Implementation is also raising new considerations such as health economics, PPIE (patient and public involvement and engagement) work and investigation of acceptability.


This is quite the feat, even within a large teaching hospital with the personnel already on-site. It remains to be seen whether AI solutions that clear these numerous hurdles can be generalised into diverse and often less-resourced smaller hospitals with any degree of success.

Instead, it may look a lot like trying to parallel park a McLaren.



Written by Dr Sean Duncan MBBS PGCert MRCP



Sean is a Clinical Research Fellow at the University of Glasgow.

He is currently completing an MD studying the role of artificial intelligence in Lung Cancer diagnostics.

Sean is part of the Digital Health Validation Lab, a collaborative initiative focused on accelerating adoption of health technology into clinical settings through evidence generation and developer support.



Doctors That Code