Humans less likely to accept euthanasia decisions by AI, study finds

The study found that people are generally less accepting of euthanasia decisions made by AI or robots compared to those made by human doctors.

An illustrative image of a medic using Artificial Intelligence. (photo credit: INGIMAGE)

People are less likely to accept decisions on euthanasia from AI than from humans, a new study from Finland's University of Turku has found.

The study examined subjects' moral evaluations of judgements made by AI, robots, and humans on euthanasia for patients in a coma.

Published in the peer-reviewed journal Cognition, this study was conducted in Great Britain, the Czech Republic, and Finland.

Principal Investigator Michael Laakasuo explained that the tendency for people to hold some judgements of AI and robots to a higher standard than those of humans is referred to as "the Human-Robot moral judgement asymmetry effect."

“However, it is still a scientific mystery in which decisions and situations the moral judgement asymmetry effect emerges. Our team studied various situational factors related to the emergence of this phenomenon and the acceptance of moral decisions," Laakasuo said.

A doctor using the Chameleon system (illustrative) (credit: MAARIV)

Across all three countries, respondents were consistently less accepting of euthanasia decisions made by AI or robots than of the same decisions made by human doctors.

Experiences with AI play an important role

This difference existed regardless of whether the AI was advising on the decision or actually making it. However, when the decision was to continue life support, there was no difference in acceptance between human and AI decision-makers.

Acceptance levels also aligned when patients themselves requested euthanasia while awake. The researchers suggest that this moral judgement asymmetry is at least partly due to people perceiving AI as less competent than human doctors.

“AI's ability to explain and justify its decisions was seen as limited, which may help explain why people are less accepting of AI in clinical roles,” he added.

“The implications of this research are significant as the role of AI in our society and medical care expands every day. It is important to understand the experiences and reactions of ordinary people so that future systems can be perceived as morally acceptable,” Laakasuo concluded.