MIT’s new AI tech could make limbless, slimy, squishy robots a reality
Researchers have long been working on creating a robot capable of fluidly altering its shape to navigate tight spaces.
Such a technology holds promise for applications such as deploying these robots inside the human body to extract unwanted objects.
Now, a control algorithm created by MIT researchers can automatically learn how to move, stretch, and shape a reconfigurable robot to accomplish a given task—even if it necessitates the robot changing its morphology more than once.
The MIT team has also constructed a simulator to evaluate deformable soft robot control algorithms on various difficult, shape-changing tasks.
Their approach outperformed other algorithms and completed all eight tasks they evaluated, performing particularly well on multi-stage tasks that required several shape changes.
Although reconfigurable soft robots are still in their early stages, researchers say this method may one day make possible general-purpose robots that can modify their shapes to carry out various activities.
The details of the team’s research were published on the preprint server arXiv.
Reinforcement learning in shape-shifting robotics
Researchers frequently use reinforcement learning, a machine learning technique that involves trial and error and rewards the robot for behaviors that get it closer to an objective, to teach robots to accomplish tasks.
However, a shape-shifting robot, which is controlled by magnetic fields, can dynamically squish, bend, or elongate its entire body. “Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” said Boyuan Chen, a graduate student from MIT and co-author of the study, in a statement.
The team had to approach this challenge in a new way to find a solution. Their reinforcement learning system starts by learning to regulate groups of neighboring muscles that operate together instead of moving each little muscle independently.
After first exploring the space of possible actions by concentrating on muscle groups, the algorithm then drills down to refine the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.
“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” said Vincent Sitzmann, an assistant professor at MIT.
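To make the idea concrete, here is a minimal sketch, assuming a square grid of muscle points and a hand-picked curriculum of grid resolutions (both are illustrative choices, not details from the paper): early in training, a single random value drives a whole block of neighboring muscles, and later stages raise the resolution of the action grid so control becomes progressively finer.

```python
import numpy as np

# Minimal coarse-to-fine sketch (illustrative, not the team's code).
# FINE_RES and CURRICULUM are assumed values for demonstration.
FINE_RES = 64             # fine-grained muscle points per side
CURRICULUM = [4, 16, 64]  # coarse -> fine action-grid resolutions

def sample_coarse_action(res, rng):
    # One random actuation value per coarse cell.
    return rng.uniform(-1.0, 1.0, size=(res, res))

def upsample_to_fine(coarse, fine_res=FINE_RES):
    # Repeat each coarse value over the block of fine muscles it controls.
    factor = fine_res // coarse.shape[0]
    return np.kron(coarse, np.ones((factor, factor)))

rng = np.random.default_rng(0)
for stage, res in enumerate(CURRICULUM):
    action = upsample_to_fine(sample_coarse_action(res, rng))
    # At stage 0, one value moves a 16x16 block of muscles together, so a
    # random action produces a large, meaningful change in the outcome.
    print(f"stage {stage}: {res}x{res} grid drives {action.size} muscle points")
```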
The researchers drew on ideas from image processing to map a robot’s movement possibilities in its surroundings. Their model simulates robot motion with the material point method and represents control as a 2D action space, treating each action point like a pixel in an image.
This approach captures spatial correlations, recognizing that points near each other tend to move in similar ways. The model also predicts suitable robot actions from an image-like view of the environment, improving efficiency when navigating constrained spaces.
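The image analogy can be sketched in code as well. The snippet below is an assumed architecture, not the paper’s model: a small convolutional network and a 64x64 action grid are illustrative choices. The policy reads an image-like observation of the robot and its surroundings and outputs a 2D map of actuation values; because the convolution weights are shared across space, nearby action points naturally produce correlated outputs.

```python
import torch
import torch.nn as nn

# Illustrative action-map policy (assumed architecture, not the paper's model).
class ActionMapPolicy(nn.Module):
    def __init__(self, obs_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(obs_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Tanh(),  # actuation values in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, channels, height, width) rasterized view of robot + scene
        return self.net(obs).squeeze(1)  # (batch, height, width) action map

policy = ActionMapPolicy()
obs = torch.zeros(1, 3, 64, 64)   # dummy image-like observation
action_map = policy(obs)          # one actuation value per spatial point
print(action_map.shape)           # torch.Size([1, 64, 64])
```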
Algorithm success in shape modification tasks
After developing this method, the researchers needed a way to test it, so they built a simulation environment that they named DittoGym.
DittoGym comprises eight activities that assess the capacity of a reconfigurable robot to alter its shape dynamically. In one, the robot needs to lengthen and twist its body to pass through obstructions and arrive at its destination. In another, it has to reshape itself to resemble alphabetic characters.
“Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots,” said Suning Huang, an undergraduate student at Tsinghua University in China, who completed this work while a visiting student at MIT. “Each task is designed to represent certain properties that we deem important, such as the ability to navigate through long-horizon explorations, analyze the environment, and interact with external objects.”
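The article does not spell out DittoGym’s API, so the loop below only assumes a standard Gymnasium-style interface; the environment name and the stand-in class are hypothetical, included so this evaluation loop over a 2D action map runs as a self-contained sketch.

```python
import numpy as np

# Hypothetical evaluation loop for a DittoGym-style task. The real benchmark's
# API may differ; "dittogym/Shape-v0" is a made-up id for illustration.
# import gymnasium as gym
# env = gym.make("dittogym/Shape-v0")

class DummyEnv:
    """Stand-in with a Gymnasium-like interface so this sketch runs as-is."""
    def reset(self, seed=None):
        return np.zeros((64, 64)), {}
    def step(self, action):
        return np.zeros((64, 64)), 0.0, False, False, {}

env = DummyEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(100):
    action = np.random.uniform(-1, 1, size=(64, 64))  # 2D action map per step
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
print("episode return:", total_reward)
```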
According to researchers, the algorithm worked better than baseline approaches and was the only one that could finish multistage assignments requiring several shape modifications. “We have a stronger correlation between action points that are closer to each other, and I think that is key to making this work so well,” said Chen.
Though the deployment of shape-shifting robots in practical settings may be distant, the team aims to inspire fellow scientists. Their endeavor not only delves into reconfigurable soft robotics but also encourages the exploration of 2D action spaces for tackling complex control challenges.
Abstract
Robot co-design, where the morphology of a robot is optimized jointly with a learned policy to solve a specific task, is an emerging area of research. It holds particular promise for soft robots, which are amenable to novel manufacturing techniques that can realize learned morphologies and actuators. Inspired by nature and recent novel robot designs, we propose to go a step further and explore the novel reconfigurable robots, defined as robots that can change their morphology within their lifetime. We formalize the control of reconfigurable soft robots as a high-dimensional reinforcement learning (RL) problem. We unify morphology change, locomotion, and environment interaction in the same action space and introduce an appropriate, coarse-to-fine curriculum that enables us to discover policies that accomplish fine-grained control of the resulting robots. We also introduce DittoGym, a comprehensive RL benchmark for reconfigurable soft robots that require fine-grained morphology changes to accomplish the tasks. Finally, we evaluate our proposed coarse-to-fine algorithm on DittoGym and demonstrate robots that learn to change their morphology several times within a sequence, uniquely enabled by our RL algorithm.
ABOUT THE EDITOR
Jijo Malayil is an automotive and business journalist based in India. Armed with a BA in History (Honors) from St. Stephen’s College, Delhi University, and a PG diploma in Journalism from the Indian Institute of Mass Communication, Delhi, he has worked for news agencies, national newspapers, and automotive magazines. In his spare time, he likes to go off-roading, engage in political discourse, travel, and teach languages.