
The downside of artificial intelligence for healthcare professionals

Artificial intelligence presents huge opportunities for the healthcare profession, but what about the downsides of this emerging technology?

Unless you have been cut off from civilisation with no access to news or social media, I am fairly sure you will be aware of the latest high-profile technology breakthrough: artificial intelligence (AI).

It has ignited many tech companies’ share prices, while other companies from a wide range of backgrounds have scrambled to get in on the act. I have covered it in previous editions of this blog, namely “ChatGPT: what will it mean for you?” and “Practical AI tools to make life easier”.

There is a huge upside to this emerging and exciting technology, but as healthcare professionals know only too well from their own field, every breakthrough in medicine carries a potential downside. Sometimes it is obvious from the word go; sometimes a problem emerges years down the line. AI is no exception, and although it is easy to get carried away with its undeniable benefits, we must also consider the downsides, in the same way we would weigh the pros and cons of a new drug or any other medical intervention.

Artificial intelligence and hallucinations

Healthcare professionals often like a case history, so let’s start with one involving a professional person: a New York-based lawyer. This lawyer hugely embarrassed himself when he used ChatGPT (a well-known AI engine covered in my previous articles) to generate allegedly historical legal cases and text to support a lawsuit he was bringing on behalf of his client in a Manhattan court.

I used the word allegedly in the previous paragraph because, unfortunately for the lawyer and his client, the legal precedents and the brief produced by ChatGPT were completely false and the cases cited were non-existent. The mess was discovered by the opposing lawyers when they could not find the cited cases in conventional legal databases.

The humiliated lawyer claimed he thought ChatGPT was a sophisticated search engine; like many people, he clearly had no understanding of how it works. As a result, not only was the lawsuit dismissed, but the lawyer was also sanctioned.

Incidentally, the word humiliated was not produced by me (or ChatGPT!); it is how the lawyer in question described his own experience. Clearly, this high-profile case serves as a reminder, especially for professionals such as healthcare workers, that although AI engines are good, they are far from perfect.

A confident-sounding output from an AI engine such as ChatGPT that turns out to be wrong is known as a hallucination. This phenomenon has also been discussed in a recent editorial published in the British Medical Journal (BMJ).

So, what does this legal case in New York tell us? Well, as healthcare professionals, we know that our factual knowledge should be evidence-based and ideally confirmed from multiple sources. We would be horrified to be caught out in the same way as the lawyer, who by his own admission was “humiliated”. We need to be careful where we source our information, and it should be checked against a variety of trusted resources.

ChatGPT and healthcare

However, that does not mean we should dismiss AI resources such as ChatGPT. What the humiliated lawyer should have done is double-check what ChatGPT had produced by conventional means. The Economist also covered the story and noted that a substantial number of legal tasks can in fact be carried out by AI. In turn, this could make legal services cheaper and improve access for a wider audience.

The important lesson here is to understand how to use this specific technology and to appreciate its limitations. ChatGPT is not a search engine like Google, but it has sophisticated capabilities that can be harnessed by the legal and healthcare professions, provided we make strenuous efforts to guard against hallucinations and double-check the conclusions and advice it produces.
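For technically minded readers, this “generate, then verify” workflow can be sketched in a few lines of code. The example below is purely illustrative and rests on assumptions: it uses the openai Python library, a model name that may differ from whatever engine you use, and an invented example question; the “unverified” label stands in for the human step of checking every claim against trusted, conventional sources.

```python
# A minimal, illustrative sketch of "generate, then verify" - not a clinical tool.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set in the environment;
# the model name and the question are assumptions chosen only for illustration.
from openai import OpenAI

client = OpenAI()

question = "Summarise the current first-line treatment options for type 2 diabetes."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, not an endorsement
    messages=[{"role": "user", "content": question}],
)
draft = response.choices[0].message.content

# The step the New York lawyer skipped: treat the output as an unverified draft until a
# human has checked every claim and citation against trusted sources and databases.
print("UNVERIFIED DRAFT - check against trusted sources before use:\n")
print(draft)
```

The point of the sketch is the final, human step: the AI engine produces a starting point, not an answer to be acted on.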

A significant issue affecting the healthcare applications of AI is data security. AI engines such as ChatGPT require access to large datasets for learning, and of course patients’ electronic clinical records are held in secure databases. Access to these databases is restricted to a privileged few on a need-to-know basis in order to maintain patient confidentiality.

If AI engines were given access to such large patient databases for learning purposes (and they would provide very good training material), how would patients themselves give informed consent to that access? This is a significant issue, and if AI algorithms were allowed into such databases, could that also open the door to hackers?

Spreading misinformation

Another worrying feature of AI-generated content is that it can produce articles, in print or on the web, that are factually wrong and full of rubbish. Such material can be viewed as a modern version of spam email.

This is about more than rubbish information: fake content produced by AI engines can be repurposed as political disinformation, spreading discontent and disharmony, enabling hacking and even supporting criminal activity such as fraud. Equally, AI engines trawling the internet for information to digest and learn from risk breaching the copyright of the datasets they use.

Bear in mind that AI engines such as ChatGPT learn by digesting large amounts of content and data from the internet. If they were to swallow their own or other AI-generated rubbish, they would produce even more rubbish, negating their potential benefits. This whole issue of erroneous content was recently covered (13th July 2023) in a Wall Street Journal article.

There are potential solutions to both the copyright issues and the problem of accessing high-quality databases. Respected electronic textbooks and journals could “feed” AI engines, enhancing their learning and improving their knowledge and accuracy. However, the owners of the AI engines would have to pay the copyright holders for the use of their work, or come to a mutually agreeable arrangement. There is now demand from the AI industry for trusted and reliable data sources, which is likely to cost money that could eventually be passed on to the consumer. And this is not the only cost for AI providers: they require a great deal of computing power and expensive semiconductors, which again has to be fed through to the end user.

Bearing all this in mind, where does this leave the medical and healthcare professions? With ever-spiralling costs and a shortage of human resources such as doctors and nurses, are AI chatbots the answer? In case you don’t know, a chatbot (in this case powered by AI) can, as the name suggests, hold an automated text “conversation” with a person, generating text responses.

It is like having a dialogue in which the answers are produced by a computer but customised to the person asking the question. The cost-effectiveness of automating responses to patient questions is clearly attractive from an economic point of view. Of course, a downside for society is that this could lead to a loss of jobs, though that may be partly offset by the creation of AI-related jobs and by covering roles for healthcare professionals who are hard to find and recruit.

However, we are interested specifically in the clinical point of view, and it may not surprise you that big tech companies are looking at this field. Google Research has produced Med-PaLM, which is designed to answer health-related questions.

Clearly, everyone is aware of how important accuracy is when an automated service provides answers to clinical queries, whether for the medical and healthcare professions or for the general public. The reputational damage to a company from a malfunctioning, error-prone chatbot getting clinical information wrong, never mind the potential litigation from end users, would be enormous.

So no decent company is going to rush into this, but there is a lot of scientific curiosity about how, or indeed whether, we can harness this type of technology. Scientific articles have already been published in reputable journals.

It may take a while to produce a reliable AI chatbot that can answer health-related queries with confidence, but I am sure it will happen one day, probably not imminently but at some point in the not-too-distant future. However, there will likely have to be some human oversight and light, legally backed regulation, as no computer will ever be 100% reliable with healthcare information.

Yet the same can be said of humans: healthcare professionals also make mistakes and errors of judgement. So healthcare professionals working in tandem with AI could be a productive collaboration. In the meantime, it is important to become acquainted with this type of technology, as the next breakthrough in this field may not be far away. But it is just as important to be aware of the limitations of current AI technology. There is a lawyer in New York who wishes he had known that.


Dr Harry Brown is a retired GP from Leeds and medical editor of Pavilion Health Today.

