Ethics Framework For Generative AI Use in Healthcare
Author: Digital Health Cooperative Research Centre
Published: 2023/05/16 - Updated: 2024/05/12
Publication Type: Research, Study, Analysis - Peer-Reviewed: Yes
Topic: AI and Disabilities (Publications Database)
Synopsis: New paper introduces ethics framework for the responsible use, design, and governance of Generative AI applications in healthcare and medicine.
• Large Language Models (LLMs) have the potential to fundamentally transform information management, education, and communication workflows in healthcare and medicine but equally remain one of the most dangerous and misunderstood types of AI.
• This study is a plea for regulation of generative AI technology in healthcare and medicine and provides technical and governance guidance to all stakeholders of the digital health ecosystem: developers, users, and regulators.
Introduction
"Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine" - EBioMedicine.
A new paper published by leading Australian AI ethicist Stefan Harrer PhD proposes for the first time a comprehensive ethical framework for the responsible use, design, and governance of Generative AI applications in healthcare and medicine.
Main Item
The peer-reviewed study published in The Lancet's eBioMedicine journal details how Large Language Models (LLMs) have the potential to fundamentally transform information management, education, and communication workflows in healthcare and medicine but equally remain one of the most dangerous and misunderstood types of AI.
"LLMs used to be boring and safe. They have become exciting and dangerous," said Dr Harrer, who is also Chief Innovation Officer of the Digital Health Cooperative Research Centre (DHCRC), a major Australian funder of digital health research and development, and a member of the Coalition for Health AI (CHAI).
"This study is a plea for regulation of generative AI technology in healthcare and medicine and provides technical and governance guidance to all stakeholders of the digital health ecosystem: developers, users, and regulators. Because generative AI should be both exciting and safe."
LLMs are a key component of generative AI applications, which create new content including text, imagery, audio, code, and video in response to textual instructions. Prominent examples scrutinized in the study against ethical design, release, and use principles include OpenAI's chatbot ChatGPT, Google's medical LLM Med-PaLM, Stability AI's image generator Stable Diffusion, and Microsoft's biomedical language model BioGPT.
The study highlights and explains many key applications for healthcare:
- Assisting clinicians with the generation of medical reports or preauthorization letters.
- Helping medical students to study more efficiently.
- Simplifying medical jargon in clinician-patient communication.
- Increasing the efficiency of clinical trial design.
- Helping to overcome interoperability and standardization hurdles in EHR mining.
- Making drug discovery and design processes more efficient.
However, the paper also warns that the inherent danger of LLM-driven generative AI, namely the ability of LLMs to produce and disseminate false, inappropriate, and dangerous content authoritatively, convincingly, and at unprecedented scale, is increasingly being marginalized amid the ongoing hype surrounding the recently released latest generation of powerful LLM chatbots.
A Framework for Mitigating Risks of AI in Healthcare
As part of the study, Dr Harrer identified a comprehensive set of risk factors of special relevance to using LLM technology in generative AI systems for health and medicine, and proposes risk mitigation pathways for each of them. The study highlights and analyses real-life use cases of both ethical and unethical development of LLM technology.
"Good actors choose to follow an ethical path to building safe generative AI applications. Bad actors, however, are getting away with doing the opposite: by hastily productizing and releasing LLM-powered generative AI tools into a fast-growing commercial market, they gamble with the well-being of users and the integrity of AI and knowledge databases at scale. This dynamic needs to change," said Dr Harrer.
Dr Harrer argues that the limitations of LLMs are systemic and rooted in their lack of language comprehension:
"The essence of efficient knowledge retrieval is to ask the right questions, and the art of critical thinking rests on one's ability to probe responses by assessing their validity against models of the world. LLMs can perform none of these tasks. They are in-betweeners which can narrow down the vastness of all possible responses to a prompt to the most likely ones, but are unable to assess whether a prompt or response made sense or was contextually appropriate," Dr Harrer said.
Therefore, he suggests that boosting training data sizes and building ever more complex LLMs will not mitigate risks but rather amplify them. The study proposes alternative approaches to ethically (re-)designing generative AI applications, to shaping regulatory frameworks, and to directing technical research efforts towards exploring methods for implementation and enforcement of ethical design and use principles.
Dr Harrer proposes a regulatory framework with 10 principles for mitigating the risks of generative AI in health:
- Design AI as an assistive tool for augmenting the capabilities of human decision makers, not for replacing them.
- Design AI to produce performance, usage and impact metrics explaining when and how AI is used to assist decision making and scan for potential bias.
- Study the value systems of target user groups and design AI to adhere to them.
- Declare the purpose of designing and using AI at the outset of any conceptual or development work.
- Disclose all training data sources and data features.
- Design AI systems to clearly and transparently label any AI-generated content as such.
- Continuously audit AI systems against data privacy, safety, and performance standards.
- Maintain databases for documenting and sharing the results of AI audits, educate users about model capabilities, limitations and risks, and improve performance and trustworthiness of AI systems by retraining and redeploying updated algorithms.
- Apply fair-work and safe-work standards when employing human developers.
- Establish legal precedent to define under which circumstances data may be used for training AI, and establish copyright, liability, and accountability frameworks for governing the legal dependencies of training data, AI-generated content, and the impact of decisions humans make using such data.
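Two of the principles above, transparently labeling AI-generated content and documenting when and how AI assists decision making, can be illustrated with a minimal sketch. This is a hypothetical example only: the class and function names (`GeneratedContent`, `AuditLog`, `record`) are illustrative and do not come from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of two framework principles: labeling AI-generated
# content as such, and keeping an auditable record of AI-assisted work.
# All names are illustrative, not taken from the study.

@dataclass
class GeneratedContent:
    text: str
    model_name: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def labeled(self) -> str:
        # Principle: clearly and transparently label AI-generated content.
        return f"[AI-GENERATED by {self.model_name}] {self.text}"

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, content: GeneratedContent, purpose: str) -> None:
        # Principle: document when and how AI was used to assist
        # decision making, so audits and reviews remain possible.
        self.entries.append({
            "model": content.model_name,
            "purpose": purpose,
            "timestamp": content.created_at,
        })

draft = GeneratedContent("Patient summary draft ...", "example-llm-v1")
log = AuditLog()
log.record(draft, purpose="assist clinician with report drafting")
print(draft.labeled())
```

In a real clinical system the label and audit record would of course be enforced at the platform level rather than left to individual application code; the sketch only shows the shape of the data such a policy implies.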
"Without human oversight, guidance and responsible design and operation, LLM-powered generative AI applications will remain a party trick with substantial potential for creating and spreading misinformation or harmful and inaccurate content at unprecedented scale," said Dr Harrer.
He predicts that the field will move from the current competitive LLM arms race to a phase of more nuanced and risk-conscious experimentation with research-grade generative AI applications in health, medicine, and biotech, which will deliver the first commercial product offerings for niche applications in digital health data management within the next two years.
"I am inspired by thinking about the transformative role generative AI and LLMs could one day play in healthcare and medicine. But I am also acutely aware that we are by no means there yet and that, despite the prevailing hype, LLM-powered generative AI may only gain the trust and endorsement of clinicians and patients if the research and development community aims for equal levels of ethical and technical integrity as it progresses this transformative technology to market maturity."
Comments on this Research
"The DHCRC has a critical role in translating ethical AI into practice," said DHCRC CEO Annette Schmiede. "There is a newfound enthusiasm for the role of generative AI in transforming healthcare and we are at a tipping point where AI will start to become ever more integrated into the digital health ecosystem. We are on the frontline and frameworks like the one outlined in this paper will become critical to ensure an ethical and safe use of AI."
"Ethical AI requires a lifecycle approach, from data curation to model testing to ongoing monitoring. Only with the right guidelines and guardrails can we ensure our patients benefit from emerging technologies while minimizing bias and unintended consequences," said John Halamka, M.D., M.S., President of Mayo Clinic Platform and a co-founder of CHAI.
"This study provides important ethical and technical guidance to users, developers, providers, and regulators of generative AI and incentivizes them to responsibly and collectively prepare for the transformational role this technology could play in health and medicine," said Brian Anderson, M.D., Chief Digital Health Physician at MITRE.
Attribution/Source(s):
This peer reviewed publication was selected for publishing by the editors of Disabled World due to its significant relevance to the disability community. Originally authored by Digital Health Cooperative Research Centre, and published on 2023/05/16 (Edit Update: 2024/05/12), the content may have been edited for style, clarity, or brevity. For further details or clarifications, Digital Health Cooperative Research Centre can be contacted at digitalhealthcrc.com. NOTE: Disabled World does not provide any warranties or endorsements related to this article.
1 - Health Care Algorithms in Racial and Ethnic Disparities - Study points to ways to reduce potential for racial bias and inequity when using algorithms to inform clinical care.
2 - Deep Learning AI Finds New Class of Antibiotic Candidates - The newly discovered compounds can kill methicillin-resistant Staphylococcus aureus (MRSA), a bacterium that causes deadly infections.
3 - Investigation Examines Capacity of AI to Sustain Racial and Gender Biases Within Clinical Decision Making - Researchers analyzed GPT-4 performance in clinical decision support scenarios: generating clinical vignettes, diagnostic reasoning, clinical plan generation and subjective patient assessments.
Page Information, Citing and Disclaimer
Disabled World is a comprehensive online resource that provides information and news related to disabilities, assistive technologies, and accessibility issues. Founded in 2004 our website covers a wide range of topics, including disability rights, healthcare, education, employment, and independent living, with the goal of supporting the disability community and their families.
Cite This Page (APA): Digital Health Cooperative Research Centre. (2023, May 16 - Last revised: 2024, May 12). Ethics Framework For Generative AI Use in Healthcare. Disabled World. Retrieved December 10, 2024 from www.disabled-world.com/assistivedevices/ai/llm-ai.php
While we strive to provide accurate and up-to-date information, it's important to note that our content is for general informational purposes only. We always recommend consulting qualified healthcare professionals for personalized medical advice. Any 3rd party offering or advertising does not constitute an endorsement.