Heritage College of Osteopathic Medicine researcher tackles ethics of AI in health care
As the role of artificial intelligence tools in health care continues to grow, little ethical guidance exists for health care professionals on how and when to notify patients that such tools are being used, and when patient consent is necessary. With no standard approach, hospital systems and clinicians risk eroding trust with their patients and the public. In a recently published report, a Heritage College of Osteopathic Medicine researcher addresses this concern with a proposed framework to help providers navigate patient notification and informed consent practices when using AI.
“The goal of this work is to provide a practical guide for identifying when informed consent or notification is necessary, or when we really don’t need to alert anyone,” said Devora Shapiro, Ph.D., associate professor of medical ethics at the Heritage College, Cleveland. “We just want to make sure that facilities are actually taking the time to explain things that are relevant and necessary for patients to understand, so patients can make decisions in their best interest. That was the motivation behind producing this article, this guidance.”
Informed consent is the process by which a health care provider educates a patient about the risks, benefits and alternatives of a given procedure or intervention, especially those that are complex or high risk. This process empowers patients to make informed choices about their care, reinforcing their autonomy and building trust and open communication between the patient and their care provider, Shapiro said.
According to Shapiro, patient consent is necessary when AI is used in high-risk procedures, but also in decisions where AI-assisted recommendations have a significant impact on patient outcomes or treatment progression. Currently, no AI tools fully replace the role of a care provider; they are primarily supportive resources, assisting with straightforward tasks such as assigning hospital beds or with more analytical work such as interpreting radiology tests.
Shapiro and lead author Susannah Rose, Ph.D., associate professor of biomedical informatics at Vanderbilt University, propose five key criteria to determine when and how patients should be notified that AI is being used in their care: how much independence the AI is given to make decisions, the degree to which the AI model deviates from established medical practice, whether the AI interacts directly with patients, the potential risk introduced to patient care, and the practical challenges of implementing the notification and consent process.
The proposed framework also categorizes AI technologies into three levels and assigns a scoring system to determine the degree to which informed consent is needed.
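To make the idea concrete, here is a minimal sketch of how a facility might turn such criteria into a working rubric. The criterion names paraphrase the five factors described above; the 0-2 scale, the thresholds and the tier labels are assumptions invented for illustration only, not the scoring system published by Rose and Shapiro.

```python
# Hypothetical rubric in the spirit of the five-criteria framework.
# Scale, thresholds, and tier names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIUseAssessment:
    autonomy: int        # independence the AI has in making decisions (0-2)
    deviation: int       # deviation from established medical practice (0-2)
    patient_facing: int  # whether the AI interacts directly with patients (0-2)
    risk: int            # potential risk introduced to patient care (0-2)
    burden: int          # practical difficulty of notification/consent (0-2)

    def total(self) -> int:
        # Sum the five criterion scores into a single rubric score.
        return (self.autonomy + self.deviation + self.patient_facing
                + self.risk + self.burden)

def notification_tier(a: AIUseAssessment) -> str:
    """Map a total score onto three illustrative tiers of patient notification."""
    score = a.total()
    if score <= 3:
        return "no notification needed"
    if score <= 6:
        return "notify the patient"
    return "obtain informed consent"

# Example: a bed-assignment tool (a supportive, low-stakes task mentioned
# in the article) scores low on every criterion.
bed_tool = AIUseAssessment(autonomy=1, deviation=0, patient_facing=0,
                           risk=0, burden=0)
print(notification_tier(bed_tool))  # -> "no notification needed"
```

In this kind of design, low-scoring uses pass silently, mid-scoring uses trigger a notice, and only the highest-scoring uses require a formal consent conversation; the published framework presumably draws those lines with its own criteria weightings.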
“We aren’t suggesting we’ve answered all the questions, but we think this is a really solid starting point,” Shapiro said. “We also encourage people to continue having conversations about this and continue to address more concerns. That would be a wonderful thing.”
Rose and Shapiro’s framework was published in May 2024 and is primarily geared toward hospital administrators for use in shaping facility-wide AI notification practices.