written by

William Webster

Researcher, Avoncourt Partners GmbH

Culture Blog - Jun 2, 2018

Ethical issues surrounding AI in healthcare

I remember when chat rooms and emails began forging a new way of communication – instant, anonymous, utterly frank, and free of personal responsibility. It was a chance for people across the globe to enrich each other with their personal and cultural experiences. But it was also a challenge for parents fearing their teens were chatting with strangers who had perverted intentions, or for phishers seeking to steal credit card information through fake sales. As that old “new wave” of internet technology dawned, so too did a thousand questions about moral behavior and the ethics of the new medium.

Do what is right, not what is easy

The dawning of the age of artificial intelligence is not much different, particularly in the realm of healthcare, where AI is surrounded by hope and excitement. It has the potential to make caring for the sick and needy more efficient and patient-friendly. It could speed up diagnoses and reduce diagnostic errors, help patients better manage symptoms or cope with chronic illnesses, and help avoid human bias and error.

Yet some important questions must be considered: who is responsible for the decisions made by AI systems? Will the increasing use of AI lead to a loss of human interaction in healthcare? And what happens if AI systems are hacked?

AI can make erroneous decisions. When AI is used to support decision-making, is the programmer or the owner of the machine responsible? It seems that neither party deserves full liability. AI systems with deep-learning capabilities offer little insight into how their outputs are produced or how they might be validated; we hardly understand how they reach some of their conclusions, whether those conclusions are correct or incorrect.

AI systems must be trained by a programmer or programming team. There is a risk of inherent bias in the data used to train AI systems, as well as a risk to the security and privacy of potentially sensitive data. What if a machine were to share that data with other machines, and the data were then accessed or hacked by malevolent actors?

Despite the hype surrounding AI in healthcare, there is an urgent need to secure public trust in the development and use of AI technology if it is to serve the public’s interests. The potential applications of AI in healthcare are still being explored through a number of promising initiatives across different sectors, among them healthcare organizations and governments. The challenge will be to ensure that innovation in AI is developed and used in ways that are transparent, address social needs, and remain consistent with the moral values accepted by the general public.