Teaching Robots to Navigate Hectic Emergency Rooms Is No Easy Task


Robots are already proving useful in assisting medical staff and supporting those who need in-home care, such as those with dementia. But just how autonomous can they actually be when it comes to providing care to vulnerable humans?

Ahead of her talks at next week’s International Conference on Robotics and Automation in China, and the We Robot conference in September, we spoke with Dr. Laurel Riek, Associate Professor in Computer Science and Engineering at the University of California San Diego (UCSD), who also holds joint appointments in the Department of Emergency Medicine and Contextual Robotics Institute. She walked us through her research on “embodied cueing” to help people with dementia and her work on bringing robotic software into emergency rooms.


As director of the Healthcare Robotics Lab, can you give us a brief overview of its focus, and the kinds of research you and your PhD students/postdocs carry out there?[LR] My research explores building robots that can sense, understand, and learn from real people in the real world. I am interested in fundamental and applied problems to enable autonomy for robots and intelligent systems, including robot perception, coordination, and long-term learning. We situate our work within healthcare, and solve problems that arise in community health, home health, and emergency medicine settings. 

Dr. Laurel Riek

A major research focus is on personalizing robots for health. Can you tell us about that?[LR] Yes, a recent focus of our lab is on designing new methods for robots to personalize their behavior to people and adapt to them. In healthcare, one size doesn’t fit all, so it’s important that: 1) stakeholders are well included in co-designing both the technology itself and the technology-based health intervention, and 2) the systems we build are able to adapt and accommodate differences and preferences in individuals to facilitate intervention effectiveness. 

We are very passionate about health equity, and ensuring any systems we build are as accessible and affordable as possible. Thus, we work closely with community health partners to help support this goal. 

Next week you’re presenting research findings at the International Conference on Robotics and Automation. Can you tell us about your Safety Critical Deep Q-Network (SafeDQN) project and its goals?[LR] We’ve been working for a number of years with healthcare workers in hospitals, and have found that one of the busiest and most crowded areas is the Emergency Department (ED). Even in pre-COVID times, patients might be situated in hallways for hours. There are also many different people doing many different things (physicians, nurses, technicians, EMTs, family members, etc.), which not only adds to the crowding but also increases the cognitive load on everyone in the space. When building technology for an ED or any other safety-critical environment, it’s important to understand this context.

In this paper, led by my PhD student Angelique Taylor, we explored the problem of how a robot could understand and model activity in the ED, particularly with regard to patient acuity. Here, we observed that a high-acuity patient (e.g., having a heart attack or stroke) is likely to have a higher number of healthcare workers around them who are moving quickly. We used this intuition to design our system, called SafeDQN, which allows robots to understand the kind of task healthcare workers are engaging in, so that they do not interrupt life-saving care delivery.
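The intuition described above (more workers nearby, moving faster, suggests higher acuity) can be rendered as a toy scoring function. Everything here, including the function name, weights, and speed threshold, is an invented illustration rather than the paper's actual method:

```python
# Hedged sketch of the acuity intuition: a patient surrounded by many
# workers, several of them moving quickly, is likely high-acuity.
# The weights and threshold below are invented for illustration.

def estimate_acuity(nearby_worker_speeds, fast_threshold=1.5):
    """Return a rough acuity score: crowd size plus extra weight for fast movers."""
    fast = sum(1 for s in nearby_worker_speeds if s >= fast_threshold)
    return len(nearby_worker_speeds) + 2 * fast  # fast movers count double

# e.g. five workers, three moving quickly: a likely high-acuity patient
print(estimate_acuity([0.5, 1.8, 2.0, 1.6, 0.9]))  # → 11
```

A robot could use such a score to flag regions of the ED it should keep well clear of.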

Robot’s-eye view of an emergency room

Can you give us an example of how the robot might be ‘taught’ to navigate the ED?[LR] For our evaluation, we simulated four scenarios (maps) where a robot was delivering supplies to a clinician in a busy ED. Each scenario included places where high-acuity patients were being treated in hallways, and others where clinicians might be treating low-acuity patients. The robot needed to determine the path that was both safest and quickest. We compared our system to three traditional robot-navigation methods that do not take patient acuity into account: 1) Random Walk, where the robot selects actions at random until it reaches its goal; 2) A* search, which uses simple rules (heuristics) to find the shortest path; and 3) Dijkstra’s algorithm, which models the world as a graph of nodes and computes the shortest path through that graph.
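As a rough sketch of the idea (not the paper's actual SafeDQN system, which uses deep reinforcement learning), a classical planner like Dijkstra's algorithm can be made acuity-aware by penalizing grid cells near high-acuity patients. The map, penalty values, and cost function below are all invented for illustration:

```python
import heapq

# Toy 2D grid of an ED corridor: 0 = free, larger numbers = acuity penalty.
# Layout and penalties are invented for this sketch.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],   # high-acuity patient being treated in the hallway
    [0, 9, 9, 0, 0],
    [0, 0, 1, 1, 0],   # low-acuity area: small penalty
    [0, 0, 0, 0, 0],
]

def dijkstra(grid, start, goal, acuity_aware=True):
    """Shortest path via Dijkstra; optionally add acuity penalties to step cost."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols:
                # Each move costs 1, plus the cell's acuity penalty if enabled.
                step = 1 + (grid[nr][nc] if acuity_aware else 0)
                if d + step < dist.get(nbr, float("inf")):
                    dist[nbr] = d + step
                    prev[nbr] = node
                    heapq.heappush(pq, (d + step, nbr))
    # Walk back from goal to start to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

safe = dijkstra(GRID, (0, 0), (4, 4), acuity_aware=True)
naive = dijkstra(GRID, (0, 0), (4, 4), acuity_aware=False)
```

With `acuity_aware=True` the planner detours around the high-penalty hallway cells; with it off, the robot simply takes any shortest route, regardless of what is happening along it.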

And SafeDQN proved more efficient each time?[LR] Yes. We found that SafeDQN generates the safest, quickest paths for mobile robots navigating a simulated ED environment. It was significantly better than Random Walk, A*, and Dijkstra. To our knowledge, this is the first work that presents an acuity-aware navigation method for robots in safety-critical settings.

Is it true your PhD students trained the simulation AI system using YouTube videos?[LR] We trained the system using real-life documentaries featuring clinicians treating patients in the emergency department, because privacy concerns mean we can’t put cameras in the ED. These videos were very helpful, because they mimic the types of situations robots will encounter in the real world (e.g., noise, occlusion, etc.).

When will SafeDQN be tested IRL?[LR] The next step is to test the system on a physical robot in a realistic environment. We plan to do that this summer in partnership with UC San Diego Health researchers who operate the campus’s healthcare training and simulation center, where we can physically recreate real-world ED scenarios that closely mimic reality. The algorithms could also be used outside of the emergency department, for example during search-and-rescue missions.

Another project you’re working on is designing assistive robotic technology for people with dementia (PwD). Can you explain?[LR] For the past four years we’ve been working closely with dementia caregivers, people with dementia, dementia community health workers, and clinicians. Our goal was to explore how (and if) technology might help reduce the burdens faced by this community. Many were already struggling, which has only gotten worse during COVID-19. In this paper, led by my student Connie Guan, we teamed up with these partners to understand how to build technology that reflects current best practices in dementia community health caregiving.

What are some of the scenarios where robots can assist with this population? Your paper mentions ‘embodied cueing.’ Tell us about that.[LR] We looked for situations where technology product designs could mimic, or support, optimal interactions that give people with dementia more agency over their care, rather than leaving them feeling helpless. For example, asking “How do you feel about breakfast?” rather than requiring the PwD to rely on their memory (“What did you have for breakfast?”). We also looked at task assistance. People with dementia get frustrated being told what to do, so we explored how technology could de-escalate these situations, such as by facilitating a soothing sensory environment with music. Embodied cueing is when the robot gives non-verbal, visual cues, mimicking an action, to help jumpstart a person’s own action.

And the robot you built, Spoonbot, attempts to do this?[LR] Yes, it mimics eating, prompting the PwD to pick up a spoon and follow along.

Interestingly, Spoonbot looks like a 1950s-style radio. Many eldercare socio-emotive assistance robots are ‘zoomorphic,’ meaning they look like animals or pets. But you chose a different form factor. Can you explain why?[LR] From our earlier work, we knew that mealtimes were one of the most stressful situations for family caregivers, as people with dementia may need repeated prompts to eat. We also learned that playing music can help relieve stress and help with eating, especially when paired with mimicry. Finally, our earlier work showed the importance of familiarity in technology design. So we centered our ideation around these concepts, and ended up with an old-time radio that could demonstrate eating with a spoon. It also allows different songs to be programmed in, so we chose Billboard hits from the 1950s. The idea was that this was something that could sit on the dining room table during mealtimes.

Spoonbot

Did you build Spoonbot in-house, or modify an existing robotics platform?[LR] We built it in-house. We collaborated with Dr. Tania Morimoto, who is very skilled at creating novel mechanical designs. The robot is made out of wood. Inside it has a microcontroller and some motors. We went through a few iterations, and are currently refining it further. 

In another paper, you point out that, despite the benefits offered by robotics within elder care, there’s still low adoption. You also argue that technosolutionism (aka ‘there’s a tech solution for that’) is something to be wary of. Can you tell us how your approach avoids these pitfalls, for instance by designing with the end user?[LR] By engaging in visioning activities with a community and working closely with them, one can help ensure technology creation is well aligned with the community’s needs and goals. However, that doesn’t necessarily address all issues of technosolutionism. In many situations there is a non-technical solution to a problem, such as changing policies, providing more resources, or shifting societal perspectives. A society that truly values caregivers—financially, socially, and emotionally—can make a huge difference in their quality of life and the lives of those they care for. That said, there may be places for well-designed and well-contextualized health technology to meaningfully augment many aspects of life, particularly when it comes to access to care.

Finally, the goal of robotics research for aging populations is to help people stay independent for as long as possible and avoid the ‘warehousing elders’ nursing home situation, which was so devastating in spreading COVID-19 amongst vulnerable populations. How close are we to commercially available robots in this space?[LR] Robots that can help with physical tasks like lifting and bathing are definitely on the near-term horizon, and some countries are already using them. Lifting mechanisms can help reduce caregiver injuries, which are a huge problem and a major cause of disability. Affordable robotic pets are already in use in many contexts to support older adults; they can provide pet-like comfort to those who might not be able to take care of a real pet, or who are not allowed to have one.

As for more complex levels of companionship, that’s a long way off, but really may not be a good idea due to the many ethical issues it raises. However, there’s definitely a need for more technologies to support ways for older adults to connect with others, such as via accessible (and privacy-preserving) social networking, telepresence, and so on. 
