Healthcare AI and the Push for Transparency
By: Caregility Team
Efforts to bring regulatory oversight and transparency to healthcare AI recently received a push from notable advocacy groups and federal agencies.
The Coalition for Health AI
In October 2022, members of the Coalition for Health AI (CHAI) convened to finalize regulatory framework recommendations on the responsible use of artificial intelligence. ONC recently joined the FDA, NIH, and the White House Office of Science and Technology Policy (OSTP) as federal observers of the coalition, which counts Johns Hopkins University, Mayo Clinic, Google, and Microsoft among its members.
CHAI announced plans to share its recommendations, culled from healthcare stakeholder workshops and public feedback, by the end of the year. The organization’s goal is to identify priority areas where guidance is needed to ensure equity in healthcare AI research, technology, and policy. Healthcare IT News reports that CHAI researchers are also developing an online curriculum to support standards-based training on AI development, support, and maintenance.
The White House OSTP
CHAI’s news came on the heels of the White House OSTP introducing its broader Blueprint for an AI Bill of Rights. The Blueprint identifies five guidelines for the design, use, and deployment of automated systems that seek to protect Americans, including:
- Safe and effective systems – Diverse stakeholder and expert feedback; testing and risk mitigation; evaluation and reporting
- Algorithmic discrimination protections – Proactive equity assessment during design; representative data; disparity testing
- Data privacy – Patient agency over how data is used; data is secure and only used for necessary functions
- Notice and explanation – Patient notification of AI and how and why it contributes to outcomes
- Human alternatives, consideration, and fallback – Patient opportunity to opt out; human alternatives if system fails or patient opts out
The framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” including healthcare.
Clinical Decision Support (CDS) software guidance issued by the FDA in late September 2022 includes more explicit recommendations related to the use of AI in healthcare. The FDA recommends that CDS Software-as-a-Medical-Device (SaMD) solutions provide plain language descriptions of underlying algorithms, data sets, and research validation methods, including:
- A summary of the logic and methods used to provide clinical recommendations (e.g., meta-analysis of clinical studies, expert panel, statistical modeling, AI/ML techniques)
- A description of data sources used so providers can assess if data is representative of patient populations
- A description of the results from clinical studies conducted to validate the algorithm and recommendations so providers can assess potential performance and limitations (such as missing patient data or highly variable algorithm performance among sub-populations)
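The variability the FDA asks vendors to disclose can be made concrete with a small sketch: compute a validation metric per patient subgroup and flag any group that trails the best-performing one by a wide margin. The subgroup labels, records, and the 10-point threshold below are purely illustrative assumptions, not part of the FDA guidance.

```python
# Hypothetical sketch: surface per-subgroup performance disparities in a
# CDS algorithm's validation results. Group names, records, and the
# disparity threshold are illustrative, not drawn from the FDA guidance.

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (group, prediction, label) tuples."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def flag_disparities(accuracy_by_group, max_gap=0.10):
    """Flag subgroups whose accuracy trails the best-performing group by
    more than max_gap -- the kind of sub-population variability providers
    would want surfaced in plain language."""
    best = max(accuracy_by_group.values())
    return {g: acc for g, acc in accuracy_by_group.items()
            if best - acc > max_gap}

# Illustrative validation records: (subgroup, prediction, true label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

acc = subgroup_accuracy(records)  # group_a: 1.0, group_b: 0.5
print(flag_disparities(acc))      # group_b trails by more than 10 points
```

A real validation report would use clinically meaningful metrics (sensitivity, specificity, calibration) and statistically grounded thresholds, but the shape of the check is the same: break results out by sub-population rather than reporting a single aggregate number.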
Examples of applicable software functions this would impact are listed in the FDA guidance.
Establishing Trust in Healthcare AI
Each of these initiatives seeks to contribute to a more comprehensive regulatory framework for healthcare AI and offers a glimpse into what is likely ahead for the flourishing – and currently largely unregulated – field.
These tools hold tremendous potential clinically (disease prediction) and operationally (process automation). From enterprise imaging workflow support to advanced video analysis for patient fall detection, healthcare providers are eager to leverage AI to drive efficiency in care delivery. The concern is that innovation is outpacing regulatory oversight, which can pose risks to patients if left unchecked.
One key concern is that bias in underlying algorithms can lead to health inequity. What if underlying data sets do not appropriately represent all patient populations? Traditional “black box AI” yields little insight into the criteria used to build models, and that opacity limits the replicability of the scientific testing used to validate them.
Increasingly, the market is calling for transparency in healthcare AI algorithms akin to what’s seen in clinical drug trials. The kind of “explained AI” that CHAI, the White House, and the FDA are championing would pave the way for new regulatory frameworks that foster greater trust and accountability.
“Existing models of regulation are designed for ‘locked’ healthcare solutions, whereas AI is flexible and evolves over time,” notes EY GSA Life Sciences Law Leader Heinz-Uwe Dettling. “Devices may need reauthorization if the AI continues to develop in a way that deviates from the manner predicted by the manufacturer.”
And what about AI not tied to CDS, which does not currently fall under the FDA’s purview? Clearer guidance is needed on the different types of healthcare AI and how each will be regulated to ensure that guardrails for appropriate use are in place going forward. The coming years will undoubtedly see friction between AI innovation and regulation. As a broader regulatory framework materializes, those who embrace algorithm transparency could benefit from proactively leading the charge to build trust between solution providers, clinical teams, and patients.