How AI will Transform Virtual Patient Sitting

Just a few years ago, virtual sitting was being touted as one of the most important technology implementations a hospital could make to reduce costs and improve patient safety. Hospitals could install cameras into patient rooms, or make use of existing video-enabled carts, and then set up a remote monitoring center where patient sitters keep an eye on patients in multiple rooms (or even multiple facilities) all at once.

Yet such is the pace of change in information technology that even the traditional patient sitter model is now ready for a transformation. How? Enter “augmented intelligence,” more commonly known as artificial intelligence, built on machine learning.

What you need to know about the existing virtual sitter model

The current virtual patient observation model is a proven, cost-effective alternative to in-person sitters: the hospital staff who sit in rooms with patients at risk of harming themselves, for example by attempting to get out of bed unattended.

As Caregility’s Donna Gudmestad wrote last year:

Virtual patient observation can be used in a variety of settings but is key to helping hospitals avoid costs from fall injuries. Every year hundreds of thousands of patients fall in hospitals, with one-third resulting in serious injury. The Joint Commission estimates that, on average, a fall with injury costs $14,000, but depending on the severity of the injury, unreimbursed costs for treating a single hospital-related fall injury can be up to $30,000.

Yet the existing model also has shortcomings.

To start, these systems tend to send a high number of false alerts to remote patient sitters. According to a study published in the Journal of Healthcare Informatics Research in 2016, medical professionals in hospitals can encounter more than 700 alarms in a single day, making it difficult to differentiate a true emergency from a false alarm. ECRI listed false alarms among its top 10 health technology hazards in 2020. With so many false alerts, patient sitter fatigue becomes a real danger as human monitors begin to tune out alarms.

Next, the current approach to virtual sitting usually focuses only on a rectangular area around the patient bed, leaving it unable to monitor or analyze what else is in the room.

With a new generation of machine learning-enabled video analysis software paired with video monitoring technology, we have the potential to solve these core problems.

What is Augmented Intelligence or “AI”?

It’s important to note that nothing can replace a knowledgeable, experienced caregiver, but how much more effective can they be if we augment the information they have at their fingertips?

This is precisely where AI comes in.

AI can be applied to the video recordings taken in most hospital patient rooms to better categorize alarms related to movement in those rooms. Augmented Video Analysis (AVA) systems can provide additional information and data to hospital decision makers, resulting in more accurate warnings and alerts, among other benefits.

AVA leverages real-time video and experiential knowledge to learn what is going on in the room, and alert clinical staff if help is needed. Over time, AVA learns how to better observe not just that one patient, but all patients, everywhere. It learns continually—and it doesn’t get tired.

What can AVA do that existing virtual sitting systems can’t?

As mentioned, the current systems have their shortcomings. An AVA system, by contrast, uses purpose-built algorithms (sets of rules for the computer to follow) that can determine the severity and scope of the activity in the room.

The more video the system receives, the more detail it can detect, and the more accurate an assessment it can provide. Has the patient fallen out of bed, or just leaned over to pick something up off the floor? Did a visitor reach to hold a patient’s hand, or did they pull out an IV?
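
As a rough illustration only (not a description of any vendor’s actual implementation), here is a minimal Python sketch of how such a rule might distinguish a brief lean from a sustained excursion out of bed. The bed region, the thresholds, and the function names are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box in pixel coordinates."""
    x1: float
    y1: float
    x2: float
    y2: float

    def center(self) -> tuple[float, float]:
        return ((self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2)


# Illustrative bed region for one camera view (would be configured per room).
BED_REGION = Box(300, 200, 900, 600)


def inside(point: tuple[float, float], region: Box) -> bool:
    x, y = point
    return region.x1 <= x <= region.x2 and region.y1 <= y <= region.y2


def classify_event(patient_boxes: list[Box], fps: float = 10.0,
                   lean_tolerance_s: float = 3.0) -> str:
    """Classify a short window of patient detections.

    A brief excursion outside the bed region reads as a lean or a reach;
    a sustained one is escalated as a possible fall.
    """
    frames_out_of_bed = sum(
        1 for box in patient_boxes if not inside(box.center(), BED_REGION)
    )
    seconds_out_of_bed = frames_out_of_bed / fps
    if seconds_out_of_bed == 0:
        return "no alert"
    if seconds_out_of_bed <= lean_tolerance_s:
        return "low severity: brief movement outside bed region"
    return "high severity: possible fall, notify sitter"
```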

Some distinguishing features of AVA systems, illustrated in the sketch after this list, include:

  • Identifying “regions of interest” that provide a holistic view of the patient room, rather than the traditional rectangular “static box”
  • Differentiating a caregiver, patient, or visitor – and applying different rules for each persona
  • Producing “bounding boxes” around each object in the room and indicating when they interact (for example, when a visitor touches a patient or a patient touches an IV pump)
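
The sketch below makes these ideas concrete with a small, persona-aware rule table built on bounding boxes. The labels, the overlap test, and the rules are illustrative assumptions; in practice they would sit on top of a trained object-detection and person-classification model.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One detected person or object in a frame, with its bounding box."""
    label: str      # e.g. "patient", "caregiver", "visitor", "iv_pump"
    x1: float
    y1: float
    x2: float
    y2: float


def overlaps(a: Detection, b: Detection) -> bool:
    """True if the two bounding boxes intersect (a crude proxy for interaction)."""
    return not (a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1)


# Illustrative rule table: which label pairs warrant an alert when they interact.
# A caregiver touching the IV pump is expected, so it has no rule and no alert.
ALERT_RULES = {
    frozenset({"patient", "iv_pump"}): "patient touching IV pump",
    frozenset({"visitor", "iv_pump"}): "visitor touching IV pump",
}


def check_frame(detections: list[Detection]) -> list[str]:
    """Emit alert messages for rule-matching interactions in a single frame."""
    alerts = []
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            if overlaps(a, b):
                rule = ALERT_RULES.get(frozenset({a.label, b.label}))
                if rule:
                    alerts.append(rule)
    return alerts
```

Applying a different rule set per persona is what lets a system stay quiet when a caregiver adjusts an IV pump but raise an alert when a patient or visitor reaches for the same device.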

Protecting patient privacy

For all its capability to observe and analyze what is going on in a room, AVA can still be configured to protect patients’ privacy.

First, AVA de-identifies patients by blurring their faces. Second, the cameras capture only video; the software does not listen in on conversations. And third, all data is captured and stored in a secure, HIPAA-compliant system.
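
As an illustration of the de-identification step only, the sketch below blurs detected faces in a single frame using OpenCV’s bundled Haar cascade face detector. The detector choice and blur parameters are assumptions for the example, not a statement about how any particular AVA product works.

```python
import cv2

# OpenCV ships a pretrained Haar cascade for frontal faces.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def blur_faces(frame):
    """Return a copy of the frame with every detected face Gaussian-blurred."""
    output = frame.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = output[y:y + h, x:x + w]
        output[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return output
```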

Built to leverage existing infrastructure and grow over time

The new generation of advanced video analysis can make use of video technology that the hospital or health system is already invested in, whether that is carts, wall-mounted video cameras, or another system.
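
For instance, if an existing cart or wall-mounted camera already exposes a standard RTSP stream, an analysis service can typically read it with off-the-shelf tooling. The URL below is a placeholder, and the setup is an assumption about the deployment rather than a description of a specific integration.

```python
import cv2

# Placeholder URL; in practice this would point at the room's existing camera.
STREAM_URL = "rtsp://camera.example.org/room-demo"


def frames(url: str = STREAM_URL):
    """Yield frames from an existing video feed until it closes."""
    capture = cv2.VideoCapture(url)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame
    finally:
        capture.release()
```

Frames read this way could then be passed through analysis and de-identification steps like the ones sketched earlier.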

Plus, AI can be trained on and applied to new problems and risks as they are identified over time. Whatever challenge your current virtual sitting solution is facing, there is a good chance that AVA can help.

Interested in learning more about AVA? Download our whitepaper, “How Augmented Video Analysis Is Improving Patient Care—and More,” now to learn how:

  • AI and machine learning work hand-in-hand with video systems
  • AVA systems function in a hospital room
  • Patient privacy can be protected using AVA systems
  • AVA systems can benefit patient care and your bottom line
