MIT robot could help people with limited mobility dress themselves

It allows for "safe impacts" in order to dress a person more efficiently.


Robots have plenty of potential to help people with limited mobility, including models that could help the infirm put on clothes. Dressing is a particularly challenging task, however, requiring dexterity, safety and speed. Now, scientists at MIT CSAIL have developed an algorithm that strikes a balance by allowing non-harmful impacts, rather than forbidding all contact as earlier approaches did.

Humans are hardwired to accommodate and adjust to other humans, but robots have to learn all that from scratch. For example, it's relatively easy for a person to help someone else dress, as we know instinctively where to hold the clothing item, how people can bend their arms, how cloth reacts and more. However, robots have to be programmed with all that information.

In the past, algorithms have prevented robots from making any impact with humans at all in the interest of safety. However, that can lead to something called the "freezing robot" problem, where the robot essentially stops moving and can't accomplish the task it set out to do.

To get past that issue, an MIT CSAIL team led by PhD student Shen Li developed an algorithm that redefines robotic motion safety by allowing for "safe impacts" on top of collision avoidance. This lets the robot make non-harmful contact with a human to achieve its task, as long as its impact on the human is low.
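The shift from "no contact ever" to "contact is fine if it's gentle" can be illustrated with a toy planner. Everything below (the force threshold, the candidate trajectories, the function name) is invented for illustration and is not from the CSAIL paper:

```python
# Hypothetical sketch of "safe impacts" vs. strict collision avoidance.
# The 5 N threshold and the candidate trajectories are made up.

SAFE_FORCE_N = 5.0  # assumed harmless contact-force threshold, in newtons

def plan(trajectories, allow_safe_impacts):
    """Pick the fastest trajectory that satisfies the safety rule.

    Each trajectory is (duration_s, peak_contact_force_n); a peak
    force of 0.0 means the arm never touches the person.
    """
    if allow_safe_impacts:
        feasible = [t for t in trajectories if t[1] <= SAFE_FORCE_N]
    else:  # classic rule: forbid all contact -> "freezing robot" risk
        feasible = [t for t in trajectories if t[1] == 0.0]
    return min(feasible, default=None, key=lambda t: t[0])

candidates = [
    (20.0, 0.0),  # slow detour that avoids the person entirely
    (8.0, 3.5),   # quick path that brushes the sleeve gently
    (5.0, 40.0),  # fastest path, but the impact would be harmful
]

print(plan(candidates, allow_safe_impacts=False))  # (20.0, 0.0)
print(plan(candidates, allow_safe_impacts=True))   # (8.0, 3.5)
```

Under the strict rule the planner is stuck with the slow detour; once gentle contact is permitted, it can choose the much faster sleeve-brushing path while still rejecting the harmful one.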

"Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge," said Li. "By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee."

For a simple dressing task, the system worked even if the person was doing other activities like checking a phone, as shown in the video above. It does that by combining multiple models for different situations, rather than relying on a single model as before. "This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction and feedback control for safe human-robot interaction," said Carnegie Mellon University's Zackory Erickson.
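One way to read "combining multiple models" is that the robot hedges across several predictors of human motion rather than trusting one. The sketch below is a loose illustration of that idea only; the model names, grid representation and positions are invented, not taken from the paper:

```python
# Toy sketch: combine several human-motion models by taking the
# union of their predictions (a conservative estimate). All names
# and coordinates here are hypothetical.

def predict_reach(models, state):
    """Union of each model's predicted arm positions."""
    danger_zone = set()
    for model in models:
        danger_zone |= model(state)
    return danger_zone

# Each model maps the observed arm position (a grid cell) to the set
# of cells the arm might occupy next.
def resting_arm(state):
    return {state}

def phone_checking(state):
    x, y = state
    return {(x, y), (x, y + 1)}  # arm may lift toward the face

zone = predict_reach([resting_arm, phone_checking], (2, 3))
print(sorted(zone))  # [(2, 3), (2, 4)]
```

Because the combined prediction covers every behavior any single model anticipates, the robot stays safe even when the person switches activities, such as picking up a phone mid-task.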

The research is still in the early stages, but the ideas could be applied to areas beyond dressing. "This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities," Erickson said.