
What are you doing, Dave?

Artificial Intelligence (AI) is invading all areas of healthcare. Is it our saviour or our nemesis?

In the sci-fi blockbuster 2001: A Space Odyssey, advanced technology is the catalyst for quantum leaps in human evolution, helping us to reach deep space and a higher level of consciousness. But the most disturbing scene from the film is when HAL, the computer that runs the spacecraft, decides that its human cargo is not just redundant, but endangering the mission. So HAL shuts down the life support to eliminate the bug in the system.

Artificial Intelligence (AI) is currently taking over large swathes of human activity, including healthcare. Because humans are not terribly good at decision-making in conditions of uncertainty, we are delegating more and more tasks to computers equipped with algorithms and machine learning, which means that, unlike us, they learn from their mistakes and their decisions improve over time.

NHS 24 in Scotland now has a symptom checker app. Babylon, the healthcare company that offers 'virtual GP' services, has created an NHS 111 app including a paediatric symptom checker for parents. Google's DeepMind is exploring the use of AI in managing head and neck cancer, acute kidney injury, and detection of eye disease. The IBM Watson supercomputer is also being used to help manage cancer, as well as analysing data on genes, binding proteins and pathways linked to the neurodegenerative disease ALS, to help develop new treatments.1

The drug company Merck is partnering with Amazon to investigate how the Alexa voice platform could assist diabetes patients. Alexa is already in use at Boston Children's Hospital, which has launched a KidsMD app to answer parents' queries about symptoms and drug dosing. The ability of AI to predict patient behaviour has been used by University College Hospital to identify which patients are likely to miss appointments, so they can target the right people with reminders.2

Despite initial reluctance from the medical profession, AI is now used in diagnostics, in drug discovery, and with increasing confidence in clinical decision making. An AI system has been shown to make referral decisions on eye diseases with 94% accuracy, matching the performance of top ophthalmologists.3

Such is the healthcare industry's interest in AI that Ogilvy Health & Wellness is currently tracking at least 168 AI-focused startup companies. Indeed, Matt Hancock, the Health Secretary, sees technology as the saviour of the NHS. It's easy to see the attractions of AI: artificial intelligence has got to be better than natural stupidity. What could possibly go wrong?

Well, if you regard 2001 as a bit too science fiction to apply to AI as we know it, consider the facts emerging about the two Boeing 737 Max crashes, which killed over 300 people. Investigations concluded that the plane's automated anti-stall software kept kicking in, and when the pilots tried to correct it, the computer overrode them and put the plane into a fatal nosedive.

For many years now, commercial aircraft have been designed to operate on autopilot. So although the human crew are fully trained to fly the plane, their main role is as machine minders, monitoring the instruments during the flight. Modern airliners are autonomous vehicles in much the same way as driverless cars. An aerospace expert commenting on the 737 crashes said that over-reliance on automated systems leads to "atrophy of vigilance", where human operators come to trust the technology so much that they let their attention drop.

The 737 tragedy was an extreme event – the consequences of AI failure are rarely so catastrophic – but it highlighted some of the limitations of AI.

- AI is only as good as the data on which it is based. The mantra "garbage in, garbage out" applies to AI as much as to any other branch of computer science, and health data is often flawed and always incomplete.
- AI can be biased and discriminatory. Algorithms are not objective: they are the sum of a programmer's beliefs and cognitive biases, enshrined in code. For instance, the facial recognition software that Amazon sold to police departments was found to have a racial bias, lacking the moral compass that allows people to interpret crime statistics.
- AI can be hacked by parties with malicious intent. A team of researchers in Israel created a computer virus that added fake tumours to medical scans, to show how easily the security protections around diagnostic equipment can be evaded.4
- AI systems possess knowledge but not understanding. Computers may be better than humans at making predictions by correlating data, but they lack any grasp of cause and effect, or of how the components in a system affect one another. This can lead to poor decision making. For example, observations of hormone use in perimenopausal women led to the prediction that HRT would reduce coronary heart disease, but the Women's Health Initiative found that hormone supplementation had the opposite effect, for reasons that were then unknown.

All of this means that smart machines can sometimes make dumb decisions.

When Arthur C. Clarke wrote 2001, he believed that intelligent computers would transform our capabilities, solving problems that are impervious to human reason. Alexa, halt climate change. Hey Siri, find a solution to Brexit. He might be disappointed at the banal tasks to which most AI is being put: labour-saving digital assistants that allow people to turn up the central heating and order another pizza from Deliveroo without moving their lardy diabetic posteriors off the sofa. While in the smart kitchen, the internet-connected refrigerator detects that the oven chips are running low, so it orders another batch from Ocado. Are voice assistants and sentient fridges the pinnacle of human achievement? Or just another nail in our plus-sized coffins?

When people as smart as Stephen Hawking and Elon Musk warn that AI could spell the end of our species, we have to ask ourselves whether AI is an opportunity to augment our potential, or whether we are engineering our own demise. Healthcare is one area where AI has the ability to save lives and improve care. But because it involves life and death decisions, the cost of failure is high, so we must apply it wisely.

Most doctors excel at complex problem-solving, but they are less good at time-consuming tasks like explaining things to patients again and again. Alder Hey Children's Hospital is therefore using AI as a digital assistant to take over repetitive, labour-intensive tasks and jobs that are emotionally draining, like reassuring patients.5

At the same time, work is under way to improve AI so it acquires understanding as well as knowledge. That will be an irreversible step, as it will endow AI with human-like characteristics. But it may then free human beings to develop other capabilities.

Last year the BMJ ran a debate on whether AI would make doctors obsolete.6 In China, a robot has already passed the national medical exam, so the future doesn't look promising for Homo sapiens. It was argued that AI lacks the empathy and emotional intelligence of human beings, who can build relationships, gain trust, and pick up nuances that a computer would miss. The counter-argument was that many patients don't want empathy; they just want a competent doctor who will make an accurate diagnosis and give them the treatment they want. And empathy can be programmed in, as demonstrated by the robots that are helping to care for elderly people in Japan.

Writing in the BMJ, Dr. Margaret McCartney said: "AI has great potential in healthcare. But this potential will not be realised, and harm may be caused, if we don't accept the need for robust testing before it's publicly launched and widely used. We have no clear regulator, no clear trial process, and no clear accountability trail. What could possibly go wrong?"7

There is intense global debate about the future role of AI, addressing not just its practical application but its moral and philosophical implications, even prompting us to examine what it is to be human. However, AI is expanding at such a rate that, like the internet and social media, the consequences may be felt before we have a chance to exert control. In the rush to make life easier, we are overlooking the risk that it will make life as we know it obsolete.

That's why it is vital to apply human oversight to AI projects, and not let our vigilance atrophy. Before implementing any AI program in healthcare, we should ask whether it improves what we do already, and ensure it adheres to the Hippocratic principle: First, do no harm. In addition, we should ask the ethical question: Cui bono? Who benefits from this technology? If it is the end users – patients and doctors – all well and good. If only the owners of the technology benefit, by cutting costs or harvesting data, we should consider pulling the plug.

As HAL's circuits are slowly disconnected in the film 2001, his creepy, disembodied voice asks: "Just what do you think you're doing, Dave?" In Clarke's optimistic dystopia, machine intelligence is the spur to human progress, but humans only reach their full potential when they have their hands on the controls.

2. BMJ 2019; 365 doi:
3. BMJ 2018; 362 doi:
5. BMJ 2018; 362 doi:
6. BMJ 2018; 363 doi:
7. BMJ 2018; 361 doi:

©2019 Life Healthcare Communications

14th June 2019


