Ten steps to ethics-based governance of AI in healthcare
AI’s ability to interpret data relies on processes that are not transparent, making it difficult to verify and trust outputs from AI systems.
Artificial intelligence is transforming healthcare as we know it, enabling healthcare professionals to analyze health data quickly and precisely, and leading to better detection, treatment, and prevention of a multitude of physical and mental health issues. In addition, AI plays an increasingly significant role in the fields of medical research and education.
However, AI’s ability to interpret data relies on processes that are not transparent, making it difficult to verify and trust outputs from AI systems. The use of AI in healthcare raises ethical questions that must be considered to avoid potentially harming patients, creating liability for healthcare providers and undermining public trust in these technologies.
For example, healthcare AI tools have been observed to replicate racial, socioeconomic and gender bias. Even when algorithms themselves are free of structural bias, the data they interpret may contain bias that is then replicated in clinical recommendations. Although algorithmic bias is not unique to predictive AI, AI tools can amplify these biases and compound existing healthcare inequalities.
Patients are often unaware of the extent to which healthcare AI tools are capable of mining and drawing conclusions from health and non-health data, including from sources they believe to be confidential. Consequently, patients are not fully aware of how AI predictions can be used against them.
If AI predictions about health are included in a patient’s electronic record, anyone with access to that record could discriminate on the basis of speculative forecasts about mental health, cognitive decline risk, cancer risk or potential for opioid abuse. The implications for patient safety, privacy and engagement are profound.
More concerning still, these risks have already outpaced the current legal landscape. The Health Insurance Portability and Accountability Act (HIPAA), which requires patient consent for disclosures of certain medical information, does not apply to commercial entities that are not healthcare providers or insurers.
The Americans with Disabilities Act (ADA) does not prohibit discrimination based on future medical problems, and no law prohibits decision-making on the basis of non-genetic predictive data. Traditional malpractice rules of physician liability are also becoming more complex to apply as doctors become increasingly reliant on AI tools.
As large healthcare systems increasingly adopt AI technologies, data governance structures must evolve to ensure that ethical principles are applied to all clinical, information technology, education and research endeavors. A data governance framework based on the following 10 steps can help large healthcare systems embrace AI applications in a way that reduces ethical risks to patients, providers and payers. Such a framework also enhances public trust, transforms patient experiences and provides effective ethics-based oversight.
1. Establish ethics-based governing principles. AI initiatives should align with key overarching principles to ensure these efforts are shaped and implemented in an ethical way. At a minimum, these principles should affirm the following:
- Do no harm: Human beings should exercise reasonable judgement and maintain responsibility for the life cycle of AI algorithms and systems, and for the healthcare outcomes that stem from them.
- AI tools should be designed and developed using transparent protocols, auditable methodologies and metadata; a minimal metadata sketch follows this list.
- AI systems should collect and handle patient data in ways that reduce bias against population groups.
- Patients should be apprised of the known risks and benefits of AI technologies so they can make informed medical decisions.
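One way to make the auditability principle concrete is to attach structured metadata to every model, in the spirit of a model card. The following is a minimal sketch in Python (3.10+); the fields and the example model are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelCard:
    """Minimal audit metadata for a clinical AI model (illustrative fields)."""
    name: str
    version: str
    intended_use: str               # clinical task the model is approved for
    training_data_summary: str      # provenance and demographics of training data
    known_limitations: list[str] = field(default_factory=list)
    validation_date: date | None = None

# Hypothetical model, for illustration only.
sepsis_model = ModelCard(
    name="sepsis-risk",
    version="1.2.0",
    intended_use="Early warning of sepsis in adult inpatients",
    training_data_summary="2015-2020 EHR data from three academic hospitals",
    known_limitations=["Not validated for pediatric patients"],
    validation_date=date(2021, 6, 1),
)
```

Recording this metadata at deployment time, rather than reconstructing it later, is what makes downstream audits and peer review tractable.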
2. Establish a Digital Ethics Steering Committee. Large healthcare systems should operationalize AI strategy through a Digital Ethics Steering Committee composed of the Chief Data Officer, Chief Privacy Officer, Chief Information Officer, Chief Health Informatics Officer, Director of Artificial Intelligence, Chief Risk Officer and Chief Ethics Officer. These offices are best positioned to engage the intertwined issues of privacy, data, ethics and technology.
3. Convene diverse focus groups. Diverse focus groups representative of the stakeholders from whom data sets may be collected are critical to reducing algorithmic bias.
4. Subject algorithms to peer review. Rigorous peer review processes are essential to exposing and addressing blind spots and weaknesses in AI models. In particular, AI tools that will be applied to data involving race or gender should be peer reviewed and validated to avoid compounding bias.
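One concrete check peer reviewers can run is a comparison of error rates across demographic subgroups. The sketch below assumes a labeled validation set with binary outcomes and predictions; the example data and any escalation threshold are illustrative.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compare false-negative rates across demographic subgroups.

    `records` is an iterable of (group, y_true, y_pred) tuples, where
    group is e.g. a race or gender category from the validation set.
    Large gaps between groups are a signal for reviewers to investigate.
    """
    missed = defaultdict(int)     # positives the model failed to flag
    positives = defaultdict(int)  # all true positives, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives if positives[g]}

# Example: a gap like this between groups would warrant review.
rates = subgroup_error_rates([
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1),
])
print(rates)  # {'A': 0.33..., 'B': 0.66...}
```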
5. Conduct AI model simulations. Simulations that stress-test scenarios in which AI tools are susceptible to bias help mitigate risk and improve confidence in AI outputs.
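One simple simulation of this kind is a counterfactual perturbation test: hold every clinical feature fixed, vary only a sensitive attribute, and measure how far the model's risk score moves. The sketch below assumes the model is exposed as a function from a feature dict to a score in [0, 1]; the toy model is purely illustrative.

```python
def counterfactual_sensitivity(predict, patient, attribute, values):
    """Vary one sensitive attribute while holding all other features fixed,
    and report how much the model's risk score moves.

    A large spread suggests the model is leaning on the attribute
    (or a proxy for it) rather than on clinical features alone.
    """
    scores = {}
    for v in values:
        variant = dict(patient, **{attribute: v})  # copy with one field changed
        scores[v] = predict(variant)
    return max(scores.values()) - min(scores.values()), scores

# Hypothetical model, for illustration only.
def toy_model(p):
    return 0.3 + 0.1 * p["age_over_65"] + (0.2 if p["race"] == "B" else 0.0)

spread, by_value = counterfactual_sensitivity(
    toy_model, {"age_over_65": 1, "race": "A"}, "race", ["A", "B"])
print(round(spread, 2))  # 0.2 -- a red flag worth simulating at scale
```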
6. Develop clinician-focused guidance for interpreting AI results. Physicians must be equipped to give appropriate weight to AI tools in a way that preserves their clinical judgement. AI technologies should be designed and implemented in a way that augments, rather than replaces, professional clinical judgement.
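Such guidance can also be reinforced at the point of care by how results are presented. Below is a minimal sketch of framing a risk score as advisory decision support rather than a verdict; the threshold, validation figure and wording are placeholder assumptions.

```python
def present_risk(score, threshold=0.7, n_validation=5000):
    """Frame an AI risk score as decision support, not a conclusion.
    The threshold and validation count shown are illustrative placeholders.
    """
    flag = "ELEVATED" if score >= threshold else "not elevated"
    return (
        f"Model risk score: {score:.2f} ({flag}; alert threshold {threshold}).\n"
        f"Validated on {n_validation} patients; performance varies by subgroup.\n"
        "This output is advisory. Final assessment rests with the clinician."
    )

print(present_risk(0.82))
```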
7. Develop external change communication and training strategies. As applications of AI in healthcare evolve, a strategic communication plan is important to ensuring that patients understand the key benefits and risks of healthcare AI. A robust training plan must also address the ethical and clinical nuances that arise among patients, caregivers, researchers and AI systems.
8. Maintain a log of tests. A repository of test plans, methodologies and results will facilitate identification of positive and negative trends in AI technologies. This repository can be leveraged to improve comprehension of and confidence in AI outputs.
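Such a repository need not be elaborate; an append-only structured log is enough to make trends visible. Below is a minimal sketch using only the Python standard library and a JSON Lines file; the schema fields and example metrics are assumptions.

```python
import json
from datetime import datetime, timezone

def log_test(path, model, version, test_name, metrics, notes=""):
    """Append one structured test record (JSON Lines) to the repository.
    Keeping every run, including failures, is what makes trends visible.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "version": version,
        "test": test_name,
        "metrics": metrics,     # e.g. {"auc": 0.81, "fnr_gap": 0.12}
        "notes": notes,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_test("ai_test_log.jsonl", "sepsis-risk", "1.2.0",
         "subgroup_fnr_audit", {"auc": 0.81, "fnr_gap": 0.12},
         notes="Gap above 0.10 escalated to steering committee.")
```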
9. Test algorithms in blind randomized controlled experiments. Rigorous testing through blinded, randomized controlled experiments is an effective method of isolating sources of bias and reducing or eliminating them.
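In such an experiment, allocation should be reproducible and auditable while remaining unpredictable to clinicians. One common approach, sketched below under assumed trial parameters, is deterministic assignment by hashing a salted patient identifier.

```python
import hashlib

def assign_arm(patient_id, trial_salt="trial-2024"):
    """Deterministically randomize a patient to 'ai_assisted' or 'control'.

    Hashing a salted ID gives a stable, auditable 50/50 allocation that is
    unpredictable to clinicians, supporting blinding. The salt is illustrative.
    """
    digest = hashlib.sha256(f"{trial_salt}:{patient_id}".encode()).digest()
    return "ai_assisted" if digest[0] % 2 == 0 else "control"

arms = [assign_arm(f"patient-{i}") for i in range(1000)]
print(arms.count("ai_assisted"), arms.count("control"))  # roughly 500 / 500
```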
10. Continuously monitor algorithmic decision processes. AI, by its nature, is always evolving; consequently, algorithmic decision processes must be monitored, assessed and refined continuously.
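A practical starting point for continuous monitoring is a drift statistic computed over the model's input or score distributions. The sketch below implements the population stability index (PSI); the bins, baseline and the 0.2 rule of thumb are conventional but should be treated as assumptions to validate locally.

```python
import math

def psi(expected_fracs, observed_fracs, floor=1e-6):
    """Population stability index between a baseline and a live distribution.

    Inputs are per-bin fractions over the same bins, each summing to ~1.
    A common rule of thumb: PSI > 0.2 signals meaningful drift worth review.
    """
    total = 0.0
    for e, o in zip(expected_fracs, observed_fracs):
        e, o = max(e, floor), max(o, floor)  # guard against empty bins
        total += (o - e) * math.log(o / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]      # score distribution at validation
this_month = [0.15, 0.20, 0.30, 0.35]    # illustrative live distribution
print(round(psi(baseline, this_month), 3))  # ~0.105: watch, but below 0.2
```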