Robotics

Using machine learning, robotic feeding system empowers users with mobility issues


This robotic feeding system, trained with machine learning, could transform lives by giving greater independence to people with severe mobility issues.

A team of researchers from Cornell University has developed a robotic feeding system that combines machine learning, multimodal sensing, and computer vision to help feed people with severe mobility issues.

Robot-assisted feeding systems are already being used to greatly enhance the lives of users with mobility limitations. These systems can pick up food and position it so that users can lean forward and take a bite, but not all users have the ability to lean forward. 

Additionally, some people who would otherwise rely on these systems have restricted mouth movement and openings that prohibit their use. Other characteristics, such as sudden muscle spasms, can also pose challenges.

In these cases, users would benefit from a system capable of precision food placement and “in-mouth feeding” with utensils that can be guided by intentional tongue movements. 

The Cornell team developed just such a system and presented their robot at the Human-Robot Interaction (HRI) conference, held in March in Boulder, Colorado, where it won a Best Demo Award.

“Feeding individuals with severe mobility limitations with a robot is difficult, as many cannot lean forward and require food to be placed directly inside their mouths,” said senior developer Tapomayukh “Tapo” Bhattacharjee, an assistant professor of computer science at Cornell’s Ann S. Bowers College of Computing and Information Science. “The challenge intensifies when feeding individuals with additional complex medical conditions.”

Feeding challenges are food for thought

In developing this robotic feeding system, the team faced the significant challenge of teaching a machine the complex process of how humans feed themselves, something that we often take for granted. 

This includes the system identifying various food items on a plate, picking them up with a utensil, and then transferring them precisely inside the user’s mouth. Bhattacharjee pointed out that the most challenging stage of this operation is around the final 2 inches (5 centimeters) of the approach to the user’s mouth. 

The system also needs to account for the fact that some users have mouth openings of less than an inch (around 2 centimeters), and it has to handle unexpected muscle spasms that may occur while the utensil approaches or even while it is inside the user’s mouth.

Additionally, the team determined that it would be desirable for the user to indicate to the system with their tongues which specific regions of their mouth are able to bite food.

“Current technology only looks at a person’s face once and assumes they will remain still, which is often not the case and can be very limiting for care recipients,” said paper lead author and Cornell computer science doctoral student Rajat Kumar Jenamani.

The team’s system addresses these challenges in two main ways. First, the feeding robot tracks the user’s mouth in real time, meaning it can adjust to sudden movements.

Second, this capability is boosted by a dynamic response mechanism that reacts quickly to changes in the physical interaction between the user’s mouth and the feeding utensil, allowing the system to tell the difference between an intentional bite and a sudden, unintentional spasm.
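
The paper calls this physical interaction-aware control. As a rough, hypothetical illustration of the idea only, and not the team’s actual controller, a simple force-based classifier might separate a sustained, moderate bite from a brief, high-force spasm. The thresholds, sample rate, and function names below are assumptions made purely for the sketch.

```python
# Illustrative sketch only -- not the authors' controller. Assumes a
# hypothetical force sensor reporting contact force on the utensil (newtons)
# at roughly 100 Hz, with made-up thresholds.

from collections import deque

BITE_FORCE_N = 1.0        # assumed minimum force of an intentional bite
SPASM_FORCE_N = 8.0       # assumed force spike typical of a spasm
BITE_MIN_SAMPLES = 10     # sustained contact (~0.1 s at 100 Hz) reads as a bite

def classify_contact(force_history: deque) -> str:
    """Label the most recent window of utensil force readings."""
    latest = force_history[-1]
    if latest >= SPASM_FORCE_N:
        # Large, sudden force: back off rather than continue the transfer.
        return "spasm"
    sustained = [f for f in force_history if f >= BITE_FORCE_N]
    if len(sustained) >= BITE_MIN_SAMPLES:
        # Moderate force held over time: the user is taking a bite.
        return "bite"
    return "no_contact"

# Example: a steady ~2 N contact over the last 15 samples reads as a bite.
window = deque([2.0] * 15, maxlen=100)
print(classify_contact(window))   # -> "bite"
```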

Of course, with any system like this, the ultimate validation is testing with human users.

The proof is in the pudding

The robotic element of the system is a multi-jointed arm that holds a custom-built utensil and can sense the forces acting on it. 

The mouth-tracking aspect of the system was trained using thousands of images of head poses and facial expressions, collected by two cameras, one mounted below the custom utensil and one above it. Together, the cameras detect the position of the user’s mouth and account for the parts of the view that the utensil itself obstructs.
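
To give a flavor of why two viewpoints help, here is a minimal, hypothetical sketch, not the paper’s perception pipeline. It assumes each camera produces a 3D mouth-position estimate plus a confidence score that drops when the utensil occludes that camera’s view, and simply blends the two with a confidence-weighted average.

```python
# Illustrative sketch only -- assumes per-camera mouth estimates with
# confidence scores; the real system uses trained perception models.

from dataclasses import dataclass

@dataclass
class MouthEstimate:
    x: float           # metres, in the robot's base frame (assumed)
    y: float
    z: float
    confidence: float  # 0.0 (fully occluded) to 1.0 (clear view)

def fuse_estimates(above: MouthEstimate, below: MouthEstimate):
    """Blend the two cameras' estimates, trusting the less-occluded view more."""
    total = above.confidence + below.confidence
    if total == 0:
        raise ValueError("mouth not visible to either camera")
    w_a = above.confidence / total
    w_b = below.confidence / total
    return (
        w_a * above.x + w_b * below.x,
        w_a * above.y + w_b * below.y,
        w_a * above.z + w_b * below.z,
    )

# Example: the utensil blocks the lower camera, so the upper view dominates.
upper = MouthEstimate(0.42, 0.05, 0.31, confidence=0.9)
lower = MouthEstimate(0.45, 0.04, 0.33, confidence=0.2)
print(fuse_estimates(upper, lower))
```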

After training the system, the team demonstrated the efficacy of its individual components in two separate studies, then performed a full system evaluation with 13 care recipients with diverse mobility challenges.

The tests took place across three locations: the EmPRISE Lab on the Cornell Ithaca campus, a medical center in New York City, and a care recipient’s home in Connecticut.

“This is one of the most extensive real-world evaluations of any autonomous robot-assisted feeding system with end-users,” Bhattacharjee said. “It’s amazing and very, very fulfilling.”

The team deemed the system testing a success, pointing out that participants consistently emphasized the comfort and safety of the inside-mouth bite transfer system.

Test users also gave the robotic feeding system high technology acceptance ratings, which the team said underscores its transformative potential in real-world scenarios. “We’re empowering individuals to control a 20-pound robot with just their tongue,” Jenamani said. 

Though these results are promising, the team must now conduct further research to assess the long-term usability of the system. 

Demonstrating the system’s capacity to genuinely change lives, Jenamani described the raw emotion of watching the parents of a daughter with schizencephaly quadriplegia, a rare birth defect, as she successfully fed herself with the aid of the system.

“It was a moment of real emotion; her father raised his cap in celebration, and her mother was almost in tears,” Jenamani concluded.

Reference: R. K. Jenamani et al., Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control, HRI ’24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (2024)

Feature image credit: Bharath Sriraam on Unsplash


