Healthcare AI and the Push for Transparency
Efforts to bring regulatory oversight and transparency to healthcare AI have recently gained momentum, driven by advocacy groups and federal agencies alike.
The Coalition for Health AI
In October 2022, members of the Coalition for Health AI (CHAI) convened to finalize regulatory framework recommendations on the responsible use of artificial intelligence. ONC recently joined the FDA, NIH, and the White House Office of Science and Technology Policy (OSTP) as federal observers of the coalition, which counts Johns Hopkins University, Mayo Clinic, Google, and Microsoft among its members.
CHAI announced plans to share its recommendations, culled from healthcare stakeholder workshops and public feedback, by the end of the year. The organization aims to identify priority areas that require guidance to ensure equity in healthcare AI research, technology, and policy. Healthcare IT News reports that CHAI researchers are also developing an online curriculum to support standards-based training on AI development, support, and maintenance.
The White House OSTP
CHAI’s news came on the heels of the White House OSTP introducing its broader Blueprint for an AI Bill of Rights. The Blueprint identifies five guidelines for the design, use, and deployment of automated systems, intended to protect Americans:
- Safe and effective systems – Diverse stakeholder and expert feedback; testing and risk mitigation; evaluation and reporting
- Algorithmic discrimination protections – Proactive equity assessment during design; representative data; disparity testing
- Data privacy – Patient agency over how data is used; data is secure and only used for necessary functions
- Notice and explanation – Patient notification of AI and how and why it contributes to outcomes
- Human alternatives, consideration, and fallback – Patient opportunity to opt out; human alternatives if system fails or patient opts out
The framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” including healthcare.
The FDA
Clinical Decision Support (CDS) software guidance issued by the FDA in late September 2022 includes more explicit recommendations related to the use of AI in healthcare. The FDA recommends that CDS Software-as-a-Medical-Device (SaMD) solutions provide plain language descriptions of underlying algorithms, data sets, and research validation methods, including:
- A summary of the logic and methods used to provide clinical recommendations (e.g., meta-analysis of clinical studies, expert panel, statistical modeling, AI/ML techniques)
- A description of data sources used so providers can assess if data is representative of patient populations
- A description of the results from clinical studies conducted to validate the algorithm and recommendations so providers can assess potential performance and limitations (such as missing patient data or highly variable algorithm performance among sub-populations)
The guidance also lists the software functions these recommendations would impact.
Establishing Trust in Healthcare AI
Each of these initiatives seeks to contribute to a more comprehensive regulatory framework for healthcare AI and offers a glimpse into what is likely ahead for the flourishing – and currently largely unregulated – field.
These tools hold tremendous potential clinically (e.g., disease prediction) and operationally (e.g., process automation). From enterprise imaging workflow support to advanced video analysis for patient fall detection, providers are eager to leverage AI to drive efficiency in care delivery. There is, however, growing awareness of the potential for bias in underlying algorithms, which can lead to health inequity.
Stakeholders are calling for transparency in healthcare AI algorithms, and rightly so. The kind of “explained AI” that CHAI, the White House, and the FDA are championing would pave the way for new regulatory frameworks that foster trust for clinicians and patients and accountability for vendors.
“Existing models of regulation are designed for ‘locked’ healthcare solutions, whereas AI is flexible and evolves over time,” notes EY GSA Life Sciences Law Leader Heinz-Uwe Dettling. “Devices may need reauthorization if the AI continues to develop in a way that deviates from the manner predicted by the manufacturer.”
The coming years will undoubtedly see friction between AI innovation and regulation. As a broader regulatory framework materializes, those who embrace algorithm transparency could benefit from proactively leading the charge to build trust between solution providers, clinical teams, and patients.
How AI will Transform Virtual Patient Sitting
Just a few years ago, virtual sitting was being touted as one of the most important technology implementations a hospital could make to reduce costs and improve patient safety. Hospitals could install cameras into patient rooms, or make use of existing video-enabled carts, and then set up a remote monitoring center where patient sitters keep an eye on patients in multiple rooms (or even multiple facilities) all at once.
Yet such is the pace of change in information technology that even the traditional patient sitter model is now ready for a transformation. How? Enter “augmented intelligence,” a form of artificial intelligence based on machine learning.
What you need to know about the existing virtual sitter model
The current virtual patient observation model is a proven, cost-effective strategy to replace in-person sitters – hospital staff who stay in the room with patients at risk of harming themselves, for example by attempting to get out of bed unattended.
As Caregility’s Donna Gudmestad wrote last year:
Virtual patient observation can be used in a variety of settings but is key to helping hospitals avoid costs from fall injuries. Every year hundreds of thousands of patients fall in hospitals, with one-third resulting in serious injury. The Joint Commission estimates that, on average, a fall with injury costs $14,000, but depending on the severity of the injury, unreimbursed costs for treating a single hospital-related fall injury can be up to $30,000.
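Those quoted figures support a quick back-of-envelope estimate. The sketch below treats the per-fall costs and the one-third injury rate as given; the annual fall volume is a hypothetical assumption, not a figure from the article:

```python
# Back-of-envelope estimate of annual fall-injury costs.
# Per-fall costs and injury rate come from the figures quoted above;
# the fall volume passed in is a hypothetical assumption.
AVG_COST_PER_INJURY_FALL = 14_000   # Joint Commission average cost of a fall with injury
MAX_UNREIMBURSED_COST = 30_000      # upper bound for a severe hospital-related fall injury
SERIOUS_INJURY_RATE = 1 / 3         # share of falls resulting in serious injury

def annual_fall_cost(falls_per_year, cost_per_injury=AVG_COST_PER_INJURY_FALL):
    """Estimated annual cost of fall-related injuries for one facility."""
    injury_falls = falls_per_year * SERIOUS_INJURY_RATE
    return injury_falls * cost_per_injury

# A hypothetical hospital with 300 falls per year:
estimate = annual_fall_cost(300)  # 100 injury falls at $14,000 each
```

Even at modest fall volumes, the avoidable cost quickly reaches seven figures, which is why virtual observation pays for itself so readily.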
Yet the existing model also has shortcomings.
To start, these systems tend to send a high number of false alerts to remote patient sitters. According to a 2016 study published in the Journal of Healthcare Informatics Research, medical professionals in hospitals can encounter more than 700 alarms in a single day, making it difficult to distinguish a true emergency. ECRI listed false alarms among its top 10 health technology hazards in 2020. With so many false alerts, alarm fatigue becomes a real danger as human monitors begin to tune out warnings.
Next, the current approach to virtual sitting usually focuses only on a rectangular area around the patient’s bed and cannot monitor or analyze anything else in the room.
With a new generation of machine learning-enabled video analysis software paired with video monitoring technology, we have the potential to solve these core problems.
What is Augmented Intelligence or “AI”?
It’s important to note that nothing can replace a knowledgeable, experienced caregiver, but how much more effective can they be if we augment the information they have at their fingertips?
This is precisely where AI comes in.
AI can be applied to the video feeds from most hospital patient rooms to better categorize alarms related to movement in those rooms. Augmented Video Analysis (AVA) systems can provide additional information and data to hospital decision makers, resulting in more accurate warnings and alerts, among other benefits.
AVA leverages real-time video and experiential knowledge to learn what is going on in the room, and alert clinical staff if help is needed. Over time, AVA learns how to better observe not just that one patient, but all patients, everywhere. It learns continually—and it doesn’t get tired.
What can AVA do that existing virtual sitting systems can’t?
As mentioned, the current systems have their shortcomings. AVA systems address them with purpose-built algorithms (sets of rules for the computer to follow) that can determine the severity and scope of the activity in the room.
The more video the system receives, the more detail it can detect, and the more accurate an assessment it can provide. Has the patient fallen out of bed – or just leaned over to pick something up off the floor? Did a visitor reach to hold a patient’s hand – or did they pull out an IV?
Some distinguishing features of AVA systems include:
- Identifying “regions of interest” that provide a holistic view of the patient room, rather than the traditional rectangular “static box”
- Differentiating a caregiver, patient, or visitor – and applying different rules for each persona
- Producing “bounding boxes” around each object in the room and indicating when they interact (for example, when a visitor touches a patient or a patient touches an IV pump)
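To illustrate that last feature, bounding-box interaction logic can be sketched in a few lines. The personas, box format, and rules below are hypothetical simplifications for illustration, not an actual AVA implementation:

```python
# Hypothetical sketch of AVA-style "bounding box" interaction logic.
# Box format: (x_min, y_min, x_max, y_max) in pixel coordinates.

def boxes_overlap(a, b):
    """Return True if two axis-aligned bounding boxes intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# Different rules apply per persona pair (all rules here are illustrative).
PERSONA_RULES = {
    ("visitor", "patient"): "log_contact",    # benign interaction
    ("patient", "iv_pump"): "raise_alert",    # patient touching IV pump
    ("caregiver", "patient"): "ignore",       # expected clinical contact
}

def evaluate(objects):
    """objects: list of (persona, box). Return actions for overlapping pairs."""
    actions = []
    for i, (p1, b1) in enumerate(objects):
        for p2, b2 in objects[i + 1:]:
            if boxes_overlap(b1, b2):
                rule = PERSONA_RULES.get((p1, p2)) or PERSONA_RULES.get((p2, p1))
                if rule:
                    actions.append((p1, p2, rule))
    return actions
```

The key idea is that the same geometric event (two boxes touching) maps to different actions depending on who or what is involved, which is exactly what a static rectangle around the bed cannot express.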
Protecting patient privacy
With all this capability to observe and analyze what is going on in a room, AVA can still be configured to protect the patients’ privacy.
First, AVA de-identifies patients by blurring their faces. Second, the cameras only capture video—the software does not listen in on conversations. And third, all data is captured and stored in a secure, HIPAA-compliant system.
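The first safeguard, region-based blurring, can be sketched minimally as follows, assuming a face bounding box has already been detected upstream; the function and parameters are illustrative, not a real product API:

```python
# Minimal sketch of region-based de-identification: pixelate the area
# inside a (hypothetical) face bounding box, leaving the rest untouched.
import numpy as np

def blur_region(frame, box, k=8):
    """Return a copy of frame with frame[y1:y2, x1:x2] pixelated in k x k tiles."""
    x1, y1, x2, y2 = box
    out = frame.copy()
    region = frame[y1:y2, x1:x2]
    # Downsample then repeat each sample, destroying identifying detail.
    small = region[::k, ::k]
    out[y1:y2, x1:x2] = np.repeat(np.repeat(small, k, axis=0), k, axis=1)[
        : y2 - y1, : x2 - x1
    ]
    return out
```

Because the blur is applied per region rather than per frame, caregivers still see posture and movement everywhere else in the room while the face stays unrecognizable.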
Built to leverage existing infrastructure and grow over time
The new generation of advanced video analysis can make use of video technology that the hospital or health system is already invested in, whether that is carts, wall-mounted video cameras, or another system.
Plus, AI can be trained on and applied to new problems identified over time, or new risks. Whatever challenge your current virtual sitting solution is facing, there is a good chance that AVA can help.
Interested in learning more about AVA? Download our whitepaper, “How Augmented Video Analysis Is Improving Patient Care – and More” now to learn how:
- AI and machine learning work hand-in-hand with video systems
- AVA systems function in a hospital room
- Patient privacy can be protected using AVA systems
- AVA systems can benefit patient care and your bottom line
Augmented Intelligence in Telehealth Holds Promise for Health Systems
If 2020 was the year that health systems embraced telehealth out of necessity and then discovered its many benefits, what might 2021 and beyond hold?
For health systems looking to further improve the cost savings and other advantages of telehealth, the new horizon is augmented intelligence: the use of artificial intelligence (AI) tools, such as machine learning, to assist and augment the capabilities of medical teams.
These tools can help with both routine administrative tasks and higher-level work, such as diagnosis, treatment, and patient monitoring.
Below we look at just a few of the helpful augmented intelligence tools that already exist in telehealth, preview potential future applications of augmented intelligence, and advise health systems on how best to take advantage of this new era in medical innovation.
Examples of augmented intelligence tools in telehealth and remote patient monitoring devices
The last few years have brought to market many remote patient monitoring devices that utilize augmented intelligence, enabling both hospital care staff and physicians to focus on other tasks while knowing that their patients are being continuously evaluated.
For example, EarlySense offers a sensor that is placed under a patient’s mattress and tracks multiple data points, including heart and respiratory rates. The sensor uses AI to analyze this continuous data stream and to detect early signs of deterioration, which the care team can then correct.
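The general pattern behind this kind of deterioration detection can be sketched as a rolling-baseline anomaly check. This is an illustrative simplification, not EarlySense’s actual algorithm:

```python
# Illustrative sketch of early-deterioration detection on a continuous
# vital-sign stream (e.g., heart rate), using a rolling baseline.
from collections import deque
from statistics import mean, stdev

def deterioration_alerts(readings, window=10, z_threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline.

    readings: iterable of (timestamp, value) pairs.
    Returns timestamps whose value lies more than z_threshold standard
    deviations from the mean of the previous `window` readings.
    """
    baseline = deque(maxlen=window)
    alerts = []
    for ts, value in readings:
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                alerts.append(ts)
        baseline.append(value)
    return alerts
```

Comparing each new reading against the patient’s own recent history, rather than a fixed population threshold, is what lets continuous monitoring flag a subtle change early without drowning staff in false alarms.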
Similarly, Myia collects data from at-home patients with chronic conditions and uses machine learning to surface patients needing a clinical intervention.
Somatix offers SafeBeing, a remote patient monitoring system built around a wearable that uses AI to monitor gestures and passively collect biofeedback data. The company’s cloud-based platform analyzes this data in real time to provide the care team with insights and alerts, such as an increased fall risk. The system is well suited to nursing homes and long-term care facilities.
Other companies are using AI to develop a more holistic portrait of patients’ health. Recognizing that clinical care accounts for only a small percentage of a patient’s health, with social determinants of health and behavior being major factors contributing to wellness, Innovaccer developed an AI-driven social vulnerability index that helps health systems see a fuller picture of both individual and population health.
We have also seen this type of technology help with administrative tasks across healthcare workflows.
For example, natural language processing, augmented by AI, can be used not only to transcribe patient-provider conversations during phone or video visits, but to assess which were the most salient points of the interaction and worthy of further attention. The resulting notes inform the provider’s care plan and also remind the patient of what was discussed.
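At its simplest, surfacing the “most salient points” of a transcript can be sketched as keyword-based sentence scoring. Real systems use far more sophisticated NLP; the keyword list and function below are purely illustrative:

```python
# Toy sketch of surfacing "salient points" from a visit transcript by
# scoring sentences against a clinical keyword list (list is hypothetical).
import re

CLINICAL_TERMS = {"pain", "medication", "dosage", "follow-up", "symptoms", "blood"}

def salient_sentences(transcript, top_n=2):
    """Return the top_n sentences containing the most clinical keywords."""
    sentences = [s.strip() for s in re.split(r"[.!?]", transcript) if s.strip()]
    scored = [
        (sum(1 for w in re.findall(r"[\w-]+", s.lower()) if w in CLINICAL_TERMS), s)
        for s in sentences
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [s for score, s in scored[:top_n] if score > 0]
```

In practice the ranked sentences would seed the provider’s note and the patient’s visit summary, while small talk scores zero and drops out.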
In addition, as the Advisory Board wrote last year, at Providence St. Joseph Health in Washington and other health systems, system administrators have deployed chatbots to screen patients and direct them to the right resources, thereby discouraging the so-called worried well from unnecessarily coming into hospitals.
Future possibilities for augmented intelligence in healthcare
The possible future applications of augmented intelligence or AI in healthcare workflows are limited only by our collective imagination.
A Government Accountability Office report envisioned that dermatology video visits may one day involve augmented intelligence that assesses the patient’s skin for lesions and assists dermatologists in detecting precancerous and cancerous growths.
In this Becker’s Health IT article, a technology and data specialist with the University of California, Irvine, predicts that in the future individuals will have a digital health “twin” made up of all the data about an individual’s health. This twin’s data will continually be updated, and augmented intelligence tools will reveal health trends and trajectories for the individual as well as suggest personalized steps to better health.
Here, at Caregility, we predict that combining augmented intelligence with wearables and two-way video will be a game changer for virtual care, specifically when it comes to remote patient monitoring. Each of these components has had varied success to date in individual use cases, but when they are combined into a comprehensive virtual platform, providers will see the greatest benefit in improving care and reducing costs.
Taking advantage of the augmented intelligence revolution in telehealth systems
So, how can your health system benefit from all the latest applications in augmented intelligence in telehealth and be ready when new innovations reach the market?
To build a foundation for telehealth-enabled augmented intelligence technologies, the most critical step is adopting a flexible telehealth platform capable of integrating with third-party apps and systems. Those who don’t start planning for the coming augmented intelligence healthcare transformation now may find themselves suddenly outsmarted and outmaneuvered – not by a human competitor, but by a learning machine.
For more on trends coming in the new year and beyond, check out our latest telehealth news roundup.