Deep Learning Helps Robot Learn to Walk the Way Humans Do

Darwin the robot wobbles when he learns to walk. Sometimes he falls. But unlike most robots, he responds to his mistakes — just like people do — and adjusts his technique on the fly.

His baby steps could lead to a new generation of autonomous robots that adapt to changing environments and new situations without a human reprogramming them. These robots could tackle dangerous tasks such as rescue efforts or disaster cleanup. Or they could become assistants that help out around the house or ferry packages across town.

“An autonomous robot would be able to take a high-level goal and figure out how to achieve it,” said Igor Mordatch, a postdoctoral fellow at the University of California, Berkeley, who is leading the Darwin research project. “That would be very, very powerful.”

How the Robot Learned

Darwin gets his smarts from two GPU-accelerated deep learning networks. The “learning” in deep learning happens in many layers of simulated neural networks, algorithms that are modeled on the human brain. These neural networks learn like we do, strengthening or weakening connections between neurons in response to feedback.
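That strengthen-or-weaken idea can be sketched in a few lines of Python. This is purely illustrative, not the project's code: a single simulated "neuron" adjusts its connection weight and bias in proportion to its prediction error, the feedback signal.

```python
# Illustrative sketch of feedback-driven learning (not the research code):
# a single neuron strengthens or weakens its input connection based on
# how wrong its prediction was.
def train_neuron(samples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b
            error = pred - target      # feedback: how far off we were
            w -= lr * error * x        # strengthen/weaken the connection
            b -= lr * error
    return w, b

# Learn the mapping y = 2x + 1 purely from example pairs.
samples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train_neuron(samples)
```

A deep network repeats this same update across millions of connections arranged in many layers, which is what makes GPUs so useful for the job.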

Darwin learned in two stages: in simulation and in the real world. As a foundation, Mordatch created a simulated model of Darwin’s physical presence (height, girth, etc.) and specified some basic properties of the environment (carpet or rough terrain, for instance).

What he didn’t do was teach the robot to walk.

In the simulation, the robot took what he did know to figure out the right sequence of movements, such as how to position his legs to walk to a certain location or how to twist his torso to stand from a prone position.
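A toy version of that figure-it-out process might look like the Python below. Everything here is a made-up illustration (the planner, the step sizes, the one-dimensional position), not the actual planning method: it greedily picks whichever candidate movement brings a simulated position closest to the goal.

```python
# Hypothetical sketch: search for a sequence of movements that reaches
# a target position, without being told the sequence in advance.
def plan_steps(start, target, moves=(-0.5, -0.1, 0.1, 0.5), max_steps=50):
    pos, plan = start, []
    for _ in range(max_steps):
        # pick the candidate move that gets closest to the target
        best = min(moves, key=lambda m: abs(target - (pos + m)))
        if abs(target - (pos + best)) >= abs(target - pos):
            break  # no move improves things; stop here
        pos += best
        plan.append(best)
    return plan, pos

plan, final = plan_steps(0.0, 1.3)
```

The real problem is vastly harder (many joints, balance, physics), but the principle is the same: given a model of the body and the goal, search for the movement sequence that achieves it.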

Without being taught, the deep learning robot rises from the floor to a standing position. Image courtesy of the University of Washington.

On-the-Fly Learning

In the second stage, Darwin had to apply what he learned in simulation to stand, balance and reach in the physical world. Things got tricky here. He might need to instantly determine how to keep his balance when he stretched out an arm. Or he might fall if he twisted an ankle too much and then have to get back up again.

“As much as we’ve tried to make the simulated world accurate, it’s not the same thing as the real world,” said Mordatch. “That’s why we needed on-the-fly learning.”
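On-the-fly learning can be pictured as a simple online-correction loop. The sketch below is hypothetical (the parameter names and numbers are invented, not taken from the research): the robot starts from estimates learned in simulation and repeatedly nudges each one toward what its real-world sensors report.

```python
# Hypothetical sketch of on-the-fly adaptation: start from the
# simulation's estimates, then keep correcting them with real feedback.
def adapt_online(sim_model, real_world, lr=0.5, steps=50):
    model = dict(sim_model)  # begin with what simulation taught us
    for _ in range(steps):
        for key in model:
            predicted = model[key]
            observed = real_world[key]        # real sensor reading
            # nudge the internal model toward what actually happened
            model[key] += lr * (observed - predicted)
    return model

# The simulated floor differs from the real carpet (invented numbers).
sim = {"friction": 0.30, "motor_gain": 1.00}
real = {"friction": 0.55, "motor_gain": 0.85}
adapted = adapt_online(sim, real)
```

Each real-world stumble becomes training data, so the gap between the simulated model and reality shrinks as the robot moves.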

GPUs were essential for learning of this complexity.

“If we did the training on CPU, it would have required a week. With a GPU, it ended up taking three hours,” said Mordatch, who is now using TITAN X GPUs.

Deep Learning and the Brain

Mordatch works in the lab of Pieter Abbeel, an associate professor of robotics at UC Berkeley. Mordatch’s research at Berkeley builds on work he did at the University of Washington with professors Emo Todorov and Zoran Popović. While Mordatch continues his research on Darwin, he’s also applying deep learning to create a simulated model of the human body. He’s teamed with researchers from Stanford University to understand how the human brain creates movement.

This knowledge could one day help doctors better predict how certain surgical procedures are likely to affect a patient’s movement.

Comments

  • Jesse Jarvis (http://jessejarvis.com/)

    Incredible. I may be too optimistic but I really can’t wait for a humanoid AI assistant.

  • c4p0ne

    Battlestar Galactica, here we come.

  • alanseli

    Great article and great achievements with the robot Darwin.
    I referred to it in an article on my blog.

    http://www.webshopgurus.com/2016/01/19/robots-learning-to-walk-a-glimpse-of-the-future-with-artificial-intelligence/

  • randcraw

    Why only humanoid? Are you a robot racist? 🙂

  • Emo Todorov

    This is indeed great work, although I am somewhat biased because it was done in our lab at the University of Washington, and not at UC Berkeley as the article says 🙂 The first author is now a postdoc at UC Berkeley which may explain the confusion. If anyone is interested in the technical details, the full text of the paper can be found here: http://homes.cs.washington.edu/~todorov/papers/MordatchICRA15.pdf
    It was published at the IEEE International Conference on Robotics and Automation (ICRA) 2015.

  • Jamie Beckett

    Thanks for your feedback. I’ve added information about UW’s role in this research and made a few other adjustments.