
Healthcare AI and the Push for Transparency


Efforts to bring regulatory oversight and transparency to healthcare AI recently received a push from notable advocacy groups and federal agencies.


The Coalition for Health AI

In October 2022, members of the Coalition for Health AI (CHAI) convened to finalize regulatory framework recommendations on the responsible use of artificial intelligence. The Office of the National Coordinator for Health Information Technology (ONC) recently joined the FDA, NIH, and the White House Office of Science and Technology Policy (OSTP) as federal observers of the coalition, which counts Johns Hopkins University, Mayo Clinic, Google, and Microsoft among its members.

CHAI announced plans to share its recommendations, culled from healthcare stakeholder workshops and public feedback, by the end of the year. The organization aims to identify priority areas that require guidance to ensure equity in healthcare AI research, technology, and policy. Healthcare IT News reports that CHAI researchers are also developing an online curriculum to support standards-based training on AI development, support, and maintenance.


The White House OSTP

CHAI’s news came on the heels of the White House OSTP introducing its broader Blueprint for an AI Bill of Rights. The Blueprint identifies five guidelines for the design, use, and deployment of automated systems intended to protect Americans:

  1. Safe and effective systems – Diverse stakeholder and expert feedback; testing and risk mitigation; evaluation and reporting
  2. Algorithmic discrimination protections – Proactive equity assessment during design; representative data; disparity testing
  3. Data privacy – Patient agency over how data is used; data is secure and only used for necessary functions
  4. Notice and explanation – Patient notification of AI and how and why it contributes to outcomes
  5. Human alternatives, consideration, and fallback – Patient opportunity to opt out; human alternatives if system fails or patient opts out

The framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” including healthcare.
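
To make guideline 2 above more concrete, here is a minimal sketch of what disparity testing could look like in practice: comparing a model’s accuracy across demographic sub-populations and flagging groups that lag behind. The group names, evaluation data, and tolerance are hypothetical assumptions for illustration; the Blueprint does not prescribe a specific method.

```python
# A minimal sketch of subgroup disparity testing, assuming a binary
# classifier's predictions and true labels have been collected per patient.
# Group names, data, and the max_gap tolerance are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(scores, max_gap=0.05):
    """Flag subgroups whose accuracy trails the best-performing group
    by more than max_gap (an illustrative tolerance, not a standard)."""
    best = max(scores.values())
    return {g: s for g, s in scores.items() if best - s > max_gap}

# Hypothetical evaluation data: (demographic group, true label, predicted label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

scores = accuracy_by_group(records)
print(scores)                    # {'group_a': 0.75, 'group_b': 0.5}
print(flag_disparities(scores))  # {'group_b': 0.5} -- a gap worth investigating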


The FDA

Clinical Decision Support (CDS) software guidance issued by the FDA in late September 2022 includes more explicit recommendations related to the use of AI in healthcare. The FDA recommends that CDS Software-as-a-Medical-Device (SaMD) solutions provide plain language descriptions of underlying algorithms, data sets, and research validation methods, including:

  • A summary of the logic and methods used to provide clinical recommendations (e.g., meta-analysis of clinical studies, expert panel, statistical modeling, AI/ML techniques)
  • A description of data sources used so providers can assess if data is representative of patient populations
  • A description of the results from clinical studies conducted to validate the algorithm and recommendations so providers can assess potential performance and limitations (such as missing patient data or highly variable algorithm performance among sub-populations)
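
To picture what such plain-language disclosures might look like in practice, here is a minimal sketch of a transparency summary a CDS vendor could attach to its solution. The class, field names, and example values are illustrative assumptions; the FDA guidance describes the content of the disclosures, not a data structure.

```python
# A minimal sketch of structuring the plain-language disclosures the FDA
# guidance describes. Field names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TransparencySummary:
    logic_and_methods: str         # how recommendations are generated
    data_sources: list[str]        # provenance, so providers can judge representativeness
    validation_results: str        # clinical study results
    known_limitations: list[str] = field(default_factory=list)

    def plain_language(self) -> str:
        """Render the summary as provider-facing text."""
        lines = [
            f"How recommendations are generated: {self.logic_and_methods}",
            "Data sources: " + "; ".join(self.data_sources),
            f"Validation: {self.validation_results}",
        ]
        if self.known_limitations:
            lines.append("Known limitations: " + "; ".join(self.known_limitations))
        return "\n".join(lines)

# Hypothetical example values for illustration only
summary = TransparencySummary(
    logic_and_methods="Machine-learning model trained to estimate sepsis risk",
    data_sources=["De-identified EHR data from two academic medical centers"],
    validation_results="Retrospective validation study; performance reported per site",
    known_limitations=["Performance varied across age sub-populations"],
)
print(summary.plain_language())
```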

A list of the software functions this guidance would impact is available from the FDA.


Establishing Trust in Healthcare AI

Each of these initiatives seeks to contribute to a more comprehensive regulatory framework for healthcare AI and offers a glimpse into what is likely ahead for the flourishing – and currently largely unregulated – field.

These tools hold tremendous potential clinically (e.g., disease prediction) and operationally (e.g., process automation). From enterprise imaging workflow support to advanced video analysis for patient fall detection, providers are eager to leverage AI to drive efficiency in care delivery. There is, however, growing awareness of the potential for bias in underlying algorithms, which can lead to health inequity.

Stakeholders are calling for transparency in healthcare AI algorithms, and rightly so. The kind of “explainable AI” that CHAI, the White House, and the FDA are championing would pave the way for new regulatory frameworks that foster trust for clinicians and patients and accountability for vendors.

“Existing models of regulation are designed for ‘locked’ healthcare solutions, whereas AI is flexible and evolves over time,” notes EY GSA Life Sciences Law Leader Heinz-Uwe Dettling. “Devices may need reauthorization if the AI continues to develop in a way that deviates from the manner predicted by the manufacturer.”

The coming years will undoubtedly see friction between AI innovation and regulation. As a broader regulatory framework materializes, those who embrace algorithm transparency could benefit from proactively leading the charge to build trust between solution providers, clinical teams, and patients.
