An Obi feeding robot in a study where participants issue verbal commands captured by a microphone. [Credit]

We are getting older, faster.

According to the US Census, about 1 in 6 people in the US were aged 65 or older in 2020, and in 2022 more than 70 million US adults reported having a disability. With a growing share of our community requiring physical assistance for daily activities, we need a larger, better-prepared caregiving workforce, one that leverages the technological advantages of caregiving robots.

The Robotic Caregiving and Human Interaction (RCHI) lab at CMU is a research group that’s decided to tackle this challenge head-on. Their work primarily focuses on developing physically assistive robots that help with everyday tasks such as feeding, dressing, and manipulating blankets around the body.

What’s new?

At the heart of many RCHI projects is the use of machine learning to push the capabilities of existing assistive technologies and systems further. Here are some cool projects they’ve worked on!

🧘🏻 Inferring human 3D poses from a pressure image

Previous work has largely been limited to inferring 2D pose estimates from pressure images. This RCHI project shows that estimating 3D joint positions is possible with convolutional neural networks. There are still open questions about how best to represent body-pose data and model uncertainty, but the findings are valuable for developing bedside assistance technology.
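To make the idea concrete, here is a minimal sketch of a CNN that regresses 3D joint positions from a single pressure image. The 64×27 input resolution and the 24-joint output are illustrative assumptions, not the lab’s actual architecture or data format.

```python
import torch
import torch.nn as nn

class PressureToPose3D(nn.Module):
    """Toy CNN that regresses 3D joint positions from a pressure image.

    The 64x27 pressure-mat resolution and 24-joint skeleton are
    illustrative assumptions, not the RCHI lab's actual configuration.
    """
    def __init__(self, num_joints: int = 24):
        super().__init__()
        self.num_joints = num_joints
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global average pool -> (B, 128, 1, 1)
        )
        self.regressor = nn.Linear(128, num_joints * 3)  # (x, y, z) per joint

    def forward(self, pressure: torch.Tensor) -> torch.Tensor:
        # pressure: (batch, 1, height, width) pressure-mat readings
        feats = self.features(pressure).flatten(1)
        return self.regressor(feats).view(-1, self.num_joints, 3)

model = PressureToPose3D()
fake_pressure = torch.rand(1, 1, 64, 27)   # one synthetic pressure image
joints_3d = model(fake_pressure)           # -> (1, 24, 3) estimated joint coordinates
print(joints_3d.shape)
```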

💬 VoicePilot: an LLM-integrated speech interface for feeding robots

For people with motor impairments, speech is often the preferred way to communicate needs and preferences. This idea led to the development of VoicePilot, an LLM-based speech interface that turns transcribed verbal commands into corresponding Python code. Using three primary functions that control the Obi feeding robot, the interface lets users customize the robot’s behavior, such as feeding speed and scoop depth, to their preferences.
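Here is a rough sketch of that pipeline. The control functions (set_speed, set_scoop_depth, scoop, feed), their argument ranges, and the prompt text are hypothetical placeholders rather than the real Obi API, and query_llm stands in for whichever LLM endpoint is actually called.

```python
# Sketch of the VoicePilot idea: turn a transcribed verbal command into
# Python calls against a small robot-control API. All function names and
# parameters below are hypothetical stand-ins, not the real Obi interface.

ROBOT_API_DOC = """
You control a feeding robot through exactly these Python functions:
  set_speed(level)        # 1 (slow) to 5 (fast)
  set_scoop_depth(depth)  # 1 (shallow) to 3 (deep)
  scoop(bowl)             # bowl is 1-4
  feed()                  # bring the utensil to the user's mouth
Respond with Python code only.
"""

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion request)."""
    # A real system would send `prompt` to an LLM; here we return a canned
    # response so the example runs end to end.
    return "set_speed(2)\nset_scoop_depth(3)\nscoop(1)\nfeed()"

def command_to_code(transcribed_speech: str) -> str:
    # Combine the API description with the user's transcribed speech.
    prompt = f"{ROBOT_API_DOC}\nUser said: \"{transcribed_speech}\""
    return query_llm(prompt)

# Stub implementations of the control functions so exec() below works.
def set_speed(level): print(f"speed -> {level}")
def set_scoop_depth(depth): print(f"scoop depth -> {depth}")
def scoop(bowl): print(f"scooping from bowl {bowl}")
def feed(): print("feeding")

code = command_to_code("Go a bit slower and scoop deeper from the first bowl.")
exec(code)  # toy shortcut: a real system would validate the generated code first
```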

🕶️ Assistive VR Gym to improve simulation-trained assistive robots

The four available simulations: feeding, drinking, itch scratching, and bed bathing assistance. [Credits]

How can we give robots opportunities to safely learn how to physically assist their users while mitigating potential harm? According to this research paper, virtual reality is the answer. Erickson et al. created the open-source framework Assistive VR Gym (AVR Gym), which lets humans interact safely with virtual assistive robots in a physics simulation. Their results show that AVR Gym does indeed help improve the performance of assistive robots trained in simulated environments.
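For a sense of what the underlying simulation loop looks like, here is a hedged sketch using the lab’s open-source assistive-gym package with the classic Gym interface. The environment ID and the random policy are illustrative choices (exact IDs vary by version), and AVR Gym adds a VR layer on top so a real person controls the simulated human instead of a scripted model.

```python
# Hedged sketch: rolling out one episode of a feeding task in an
# Assistive Gym-style physics simulation. A trained policy would replace
# the random action sampling below.

import gym
import assistive_gym  # registers the assistive environments with gym

env = gym.make("FeedingSawyer-v1")   # feeding task with a Sawyer arm; ID may differ by version
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    # A trained policy would map `observation` to an action here;
    # sampling randomly just demonstrates the interaction loop.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode reward: {total_reward:.2f}")
env.close()
```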