Telehealth News Roundup: Policy Updates Impacting Virtual Care
Legislative activity related to telehealth and AI picked up during the fourth quarter of 2023. Several new developments could have far-reaching implications for healthcare stakeholders. Here is a recap of some of the policy updates impacting virtual care.
Expanding grants for AI research in vital areas like healthcare
In its October 2023 executive order (EO) on artificial intelligence, the Administration addresses both the transformative potential and risks of AI, outlining broad actions to govern the development and use of the technology. Legal consultants with McDermott Will & Emery (MWE) offered a great deep dive into the healthcare-specific points in the EO, including these important milestones:
Protecting Consumers, Patients, Passengers and Students
Within 90 days of the EO’s publication, the HHS Secretary is required to establish an HHS AI Task Force.
Within one year of establishment, the Task Force is required to develop a strategic plan on policies and frameworks for the responsible deployment of AI in healthcare.
Within 180 days of the EO’s publication, the HHS Secretary is required to develop a strategy to determine whether health AI technologies maintain appropriate levels of quality.
Within 365 days of the EO’s publication, the HHS Secretary is required to establish an AI safety program with a common framework for identifying and capturing clinical errors resulting from AI deployed in healthcare settings.
Promoting Innovation
Within 90 days of the EO’s publication, the director of the US National Science Foundation (NSF), in coordination with the heads of agencies the director deems appropriate, is required to launch a pilot program implementing the National AI Research Resource (NAIRR).
Within 540 days of the EO’s publication, the director of NSF should establish at least four new national AI research institutes and identify grantmaking opportunities to support responsible AI development and use.
As MWE put it, “This is a pivotal moment for AI governance. Many of the key, material details and AI governance standards will be developed during the next six months to one year. For organizations interested in developing or using AI or machine learning tools in healthcare, there will be far-reaching implications as new standards, compliance expectations, and other guidelines emerge.”
Controlled Substance Flexibilities
Also of note in October 2023, the Drug Enforcement Administration (DEA) and the Department of Health and Human Services (HHS) once again extended flexibilities related to the prescription of controlled substances via telehealth through the end of 2024.
A practitioner can prescribe a controlled substance to a patient using telemedicine, even if the patient isn’t at a hospital or clinic registered with the DEA.
Qualifying practitioners can prescribe buprenorphine to new and existing patients with opioid use disorder based on a telephone evaluation.
As reported by Healthcare Dive, this marks the second extension of relaxed prescribing rules, making it possible for clinicians to prescribe medications for conditions like opioid use disorder and ADHD without first conducting an in-person evaluation.
With many of the telehealth flexibilities enacted during the pandemic set to expire at the end of next year, 2024 is “shaping up to be the Super Bowl for telehealth,” as Kyle Zebley, the American Telemedicine Association (ATA) senior vice president for public policy and executive director of ATA Action, put it.
During a recent congressional hearing on telehealth policy, four healthcare providers working in telehealth offered expert witness testimony outlining essential flexibilities to make permanent:
Allowing video visits for all conditions for all Medicare beneficiaries.
Allowing physicians to provide care and services to patients via audio-only modalities.
Expanding beyond qualified healthcare centers to allow licensed physical therapy, occupational therapy, and speech-language pathology practitioners to utilize telehealth services.
Opinions still differ on the future of telehealth payment parity. Some recommend reimbursing telehealth services at a lower rate to avoid market distortions, while others argue that providers will cease offering these services without parity. As the Center for Telehealth and e-Health Law (CTeL) put it in a recent summary of the hearing, “Oftentimes, without the option of services provided via telehealth, patients are left with the ‘choice’ of no care at all. Which isn’t really a choice at all.”
Efforts to bring regulatory oversight and transparency to healthcare AI received a push from notable advocacy groups recently.
The Coalition for Health AI
In October 2022, members of the Coalition for Health AI (CHAI) convened to finalize regulatory framework recommendations on the responsible use of artificial intelligence. ONC recently joined the FDA, NIH, and the White House Office of Science and Technology Policy (OSTP) as federal observers of the coalition, which counts Johns Hopkins University, Mayo Clinic, Google, and Microsoft among its members.
CHAI announced plans to share its recommendations, culled from healthcare stakeholder workshops and public feedback, by the end of the year. The organization aims to identify priority areas that require guidance to ensure equity in healthcare AI research, technology, and policy. Healthcare IT News reports that CHAI researchers are also developing an online curriculum to support standards-based training on AI development, support, and maintenance.
The White House OSTP
CHAI’s news came on the heels of the White House OSTP introducing its broader Blueprint for an AI Bill of Rights. The Blueprint identifies five guidelines for the design, use, and deployment of automated systems that seek to protect Americans, including:
Safe and effective systems – Diverse stakeholder and expert feedback; testing and risk mitigation; evaluation and reporting
Algorithmic discrimination protections – Proactive equity assessments in design; use of representative data; testing and mitigation of disparities
Data privacy – Patient agency over how data is used; data is secure and only used for necessary functions
Notice and explanation – Patient notification of AI and how and why it contributes to outcomes
Human alternatives, consideration, and fallback – Patient opportunity to opt out; human alternatives if system fails or patient opts out
The framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” including healthcare.
The FDA
Clinical Decision Support (CDS) software guidance issued by the FDA in late September 2022 includes more explicit recommendations related to the use of AI in healthcare. The FDA recommends that CDS Software-as-a-Medical-Device (SaMD) solutions provide plain language descriptions of underlying algorithms, data sets, and research validation methods, including:
A summary of the logic and methods used to provide clinical recommendations (e.g., meta-analysis of clinical studies, expert panel, statistical modeling, AI/ML techniques)
A description of data sources used so providers can assess if data is representative of patient populations
A description of the results from clinical studies conducted to validate the algorithm and recommendations so providers can assess potential performance and limitations (such as missing patient data or highly variable algorithm performance among sub-populations)
The FDA guidance includes a list of the software functions this would impact.
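To make the three disclosure areas above concrete, here is a minimal sketch of how a CDS vendor might organize that plain-language metadata in code. This is purely illustrative: the class and field names are hypothetical, not part of any FDA schema or submission format.

```python
from dataclasses import dataclass, field

@dataclass
class CdsTransparencySummary:
    """Hypothetical container for the FDA-recommended CDS disclosures."""
    logic_and_methods: str              # how recommendations are generated
    data_sources: list[str]             # so providers can judge representativeness
    validation_results: str             # clinical study results supporting the algorithm
    known_limitations: list[str] = field(default_factory=list)

    def plain_language_report(self) -> str:
        """Render the disclosures as a readable summary for clinicians."""
        lines = [
            f"How recommendations are generated: {self.logic_and_methods}",
            "Data sources: " + "; ".join(self.data_sources),
            f"Validation: {self.validation_results}",
        ]
        if self.known_limitations:
            lines.append("Known limitations: " + "; ".join(self.known_limitations))
        return "\n".join(lines)

# Example usage with placeholder values
summary = CdsTransparencySummary(
    logic_and_methods="Gradient-boosted model trained on retrospective EHR data",
    data_sources=["De-identified inpatient records, 2015-2020"],
    validation_results="External validation at two sites; performance reported per site",
    known_limitations=["Performance varies across age subgroups"],
)
print(summary.plain_language_report())
```

The point of structuring disclosures this way is that the same metadata can be rendered in product documentation, labeling, and in-app notices without drifting out of sync.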
Establishing Trust in Healthcare AI
Each of these initiatives seeks to contribute to a more comprehensive regulatory framework for healthcare AI and offers a glimpse into what is likely ahead for the flourishing – and currently largely unregulated – field.
These tools hold tremendous potential clinically (e.g., disease prediction) and operationally (e.g., process automation). From enterprise imaging workflow support to advanced video analysis for patient fall detection, providers are eager to leverage AI to drive efficiency in care delivery. There is, however, growing awareness of the potential for bias in underlying algorithms, which can lead to health inequity.
Stakeholders are calling for transparency in healthcare AI algorithms, and rightly so. The kind of “explained AI” that CHAI, the White House, and the FDA are championing would pave the way for new regulatory frameworks that foster trust for clinicians and patients and accountability for vendors.
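One concrete form that transparency can take is reporting model performance separately for each patient subgroup, which surfaces the kind of variable sub-population performance the FDA asks vendors to disclose. The sketch below is an illustrative toy example with made-up data, not a validated bias-auditing method:

```python
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """Compute sensitivity (true positive rate) per subgroup.

    records: iterable of (subgroup, y_true, y_pred) with binary labels.
    """
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Toy data: the model catches 2 of 3 positives in group A but only 1 of 3 in B
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
]
rates = sensitivity_by_subgroup(records)
print(rates)  # group A ≈ 0.67, group B ≈ 0.33
```

A gap like the one above, disclosed in plain language, is exactly what lets clinicians judge whether a tool is appropriate for their patient population.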
“Existing models of regulation are designed for ‘locked’ healthcare solutions, whereas AI is flexible and evolves over time,” notes EY GSA Life Sciences Law Leader Heinz-Uwe Dettling. “Devices may need reauthorization if the AI continues to develop in a way that deviates from the manner predicted by the manufacturer.”
The coming years will undoubtedly see friction between AI innovation and regulation. As a broader regulatory framework materializes, those who embrace algorithm transparency could benefit from proactively leading the charge to build trust between solution providers, clinical teams, and patients.