New Algorithm Leads to Breakdancing, Acrobatic Simulated Characters


A team of researchers from the University of California, Berkeley and the University of British Columbia in Canada has developed an algorithm to re-create natural motions in computer animation.

Traditional computer-simulated motions are seen as clumsy and rhythmless, often failing to mimic a human's natural movements.

Disappointed by old techniques, the team was inspired to find a solution.

“The motivation for this work is that we want to develop simulated characters that can perform some very challenging skills while also moving in a natural manner,” said Xue Bin (Jason) Peng, a UC Berkeley graduate student and researcher.

Pieter Abbeel and Sergey Levine, both from the UC Berkeley Department of Electrical Engineering and Computer Sciences, and Michiel van de Panne from the University of British Columbia also contributed to the study.

The researchers used deep reinforcement learning to re-create natural motions in humans. With this technique, the simulated characters can do acrobatics, breakdancing and martial arts, and can even respond to changes in the environment, such as being tripped or dodging projectiles.

The Computer System (DeepMimic)

Traditionally, there have been two techniques used in computer animation.

One requires designing customized controllers for every skill, such as walking, flipping, or running. The results from this method usually look pretty good, Peng said.

The other technique, which uses deep reinforcement learning methods, can simulate many tricks using a single algorithm, but its results often look unnatural.

The researchers’ new technique allows them to get “the best of both worlds,” Peng said in a statement.

The team's algorithm can simulate many skills with a single method, and its results can match or surpass those of traditional hand-designed controllers.

“Our method is extremely simple,” said Peng.

“We first collect a single demonstration of a skill from a human (e.g., a backflip or spin kick),” he continued. “The demonstrations are usually in the form of motion capture clips. We then feed this demonstration to a reinforcement learning algorithm that tries to imitate the motion of the human. The agent imitates the motion by minimizing the tracking error at every timestep, and this simple approach ends up allowing the character to learn some very dynamic and acrobatic skills.”
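The tracking idea Peng describes can be sketched in a few lines of Python. This is only a toy illustration, not the team's actual code: the function name `imitation_reward`, the `scale` parameter, and the pose vectors are all assumptions. The point is that the reward peaks when the simulated character's pose matches the motion-capture reference at a given timestep and decays exponentially with the squared tracking error, so a reinforcement learning agent maximizing this reward is pushed to imitate the clip.

```python
import math

def imitation_reward(agent_pose, reference_pose, scale=2.0):
    """Reward that peaks at 1.0 when the simulated character's joint
    angles exactly match the motion-capture reference at this timestep,
    and decays exponentially with the squared tracking error."""
    err = sum((a - r) ** 2 for a, r in zip(agent_pose, reference_pose))
    return math.exp(-scale * err)

# One frame of a toy mocap reference clip: three joint angles in radians.
reference = [0.10, -0.45, 1.20]

perfect = imitation_reward(reference, reference)        # zero tracking error
sloppy = imitation_reward([0.5, 0.0, 0.0], reference)   # large tracking error

print(perfect)  # 1.0
print(sloppy)   # a small positive value
```

Summed over every timestep of a clip, a reward of this shape gives the trial-and-error search a clear signal: poses closer to the human demonstration always score higher.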

Peng collected reference data from more than 25 motion-capture clips of backflips, cartwheels, kip-ups, vaults, running, throwing, jumping, and more.

The team then allowed the system, named DeepMimic, to practice each skill for around a month.

The computer practiced all day and night and used trial and error to find the closest match to real human movements.

Because difficult movements, such as the human backflip, require many individual body movements, the researchers set the algorithm to learn various stages of the backflip. It then took all of the stages and stitched them together to create a full motion.
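The stage-by-stage approach described above can be sketched as follows. This is a toy illustration under assumed names (`split_into_stages`, `practice_stage`), not the researchers' implementation: a reference clip is divided into stages, each stage is "practiced" separately (stubbed out here), and the learned stages are stitched back together into the full motion.

```python
def split_into_stages(clip, n_stages):
    """Divide a reference motion clip (a list of poses) into equal stages."""
    size = len(clip) // n_stages
    return [clip[i * size:(i + 1) * size] for i in range(n_stages)]

def practice_stage(stage):
    """Stand-in for training on one stage: here we simply return the
    reference poses, as if the policy had learned to track them."""
    return list(stage)

# Toy 12-frame "backflip" clip: a single joint angle per frame.
clip = [round(0.1 * t, 1) for t in range(12)]

stages = split_into_stages(clip, n_stages=3)
learned = [practice_stage(s) for s in stages]

# Stitch the learned stages back into one full motion.
full_motion = [pose for stage in learned for pose in stage]
print(len(stages), len(full_motion))  # 3 12
```

In real training, each stage would be learned by the reinforcement learning loop rather than copied, but the decomposition-then-stitching structure is the same.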

 

Real World Applications

The algorithm could have many applications.

“This method provides a simple way for simulated agents to learn a large repertoire of motor skills from a small amount of data,” said Peng. “The most immediate applications of this work will likely be more realistic and interactive characters for films and games. But in the future, we are interested in possibly using this approach of learning from demonstration to train robots to perform these sorts of dynamic skills.”

With this method, the researchers are charting new territory in deep learning and animation.

“We developed more capable agents that behave in a natural manner,” Peng said in a statement. “If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman.”

There has been considerable interest in applying this technique to robotics.

Because the method requires a lot of training before an agent can efficiently learn a particular skill, it will be difficult to apply the current method to robotics, said Peng. “But I think the general direction of learning from demonstrations is an extremely promising avenue of research for robotics, and there is a lot of exciting ongoing work that is exploring these approaches.”


The University Network