Code accompanying the following papers:
"DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills" \ (https://xbpeng.github.io/projects/DeepMimic/index.html) \
"AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control" \ (https://xbpeng.github.io/projects/AMP/index.html) \
The framework uses reinforcement learning to train a simulated humanoid to imitate a variety of motion skills from mocap data.
```
sudo apt install libgl1-mesa-dev libx11-dev libxrandr-dev libxi-dev
sudo apt install mesa-utils
sudo apt install clang
sudo apt install cmake
```
C++:

Bullet 2.88 (https://github.com/bulletphysics/bullet3/releases)

Download Bullet 2.88 from the link above and install it with the following commands:

```
./build_cmake_pybullet_double.sh
cd build_cmake
sudo make install
```
Eigen 3.3.7 (http://www.eigen.tuxfamily.org/index.php?title=Main_Page)

```
mkdir build && cd build
cmake ..
sudo make install
```
OpenGL >= 3.2

freeglut 3.0.0 (http://freeglut.sourceforge.net/)

```
cmake .
make
sudo make install
```
glew 2.1.0 (http://glew.sourceforge.net/)

```
make
sudo make install
make clean
```
Misc:

SWIG 4.0.0 (http://www.swig.org/)

```
./configure --without-pcre
make
sudo make install
```

MPI

```
sudo apt install libopenmpi-dev
```
Python:

```
pip install PyOpenGL PyOpenGL_accelerate
pip install tensorflow
pip install mpi4py
```
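Once the packages above are installed, a quick sanity check is to confirm they are importable from the Python environment you intend to train with. The sketch below is a minimal helper (`missing_modules` is a hypothetical name, not part of the codebase); the entries in `required` are assumed to be the standard import names of the packages listed above.

```python
import importlib.util

def missing_modules(names):
    """Return the subset of the given module names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Assumed import names for the packages installed above.
required = ["OpenGL", "tensorflow", "mpi4py"]
print("missing:", missing_modules(required))
```

An empty list means all three packages resolve; anything printed under `missing:` needs to be (re)installed.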
The simulated environments are written in C++, and the python wrapper is built using SWIG. Note that MPI must be installed before mpi4py. When building Bullet, be sure to disable double precision with the build flag `USE_DOUBLE_PRECISION=OFF`.
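If you prefer to configure Bullet with CMake directly instead of the provided build script, the precision flag can be passed on the command line. This is a sketch, not the project's official build recipe; `BUILD_SHARED_LIBS=ON` is an assumption here (the runtime later looks for shared `libBullet*` libraries), and generator and install prefix are up to you.

```shell
# From the Bullet source directory: configure a single-precision build.
mkdir -p build_cmake && cd build_cmake
cmake .. -DUSE_DOUBLE_PRECISION=OFF -DBUILD_SHARED_LIBS=ON
make -j4
sudo make install
```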
The wrapper is built using `DeepMimicCore.sln`.

1. Select the `x64` configuration from the configuration manager.
2. Under the project properties for `DeepMimicCore`, modify `Additional Include Directories` to specify the Bullet source directory, the Eigen include directory, and the python include directory.
3. Modify `Additional Library Directories` to specify the Bullet lib directory and the python lib directory.
4. Build the `DeepMimicCore` project with the `Release_Swig` configuration. This should generate `DeepMimicCore.py` in `DeepMimicCore/`.
Modify the `Makefile` in `DeepMimicCore/` by specifying the following:

- `EIGEN_DIR`: Eigen include directory
- `BULLET_INC_DIR`: Bullet source directory
- `PYTHON_INC`: python include directory
- `PYTHON_LIB`: python lib directory

Build the wrapper:

```
make python
```

This should generate `DeepMimicCore.py` in `DeepMimicCore/`.
Once the python wrapper has been built, training is done entirely in python using TensorFlow. `DeepMimic.py` runs the visualizer used to view the simulation. Training is done with `mpi_run.py`, which uses MPI to parallelize training across multiple processes.

`DeepMimic.py` is run by specifying an argument file that provides the configurations for a scene. For example,

```
python DeepMimic.py --arg_file args/run_humanoid3d_spinkick_args.txt
```

will run a pre-trained policy for a spinkick. Similarly,

```
python DeepMimic.py --arg_file args/play_motion_humanoid3d_args.txt
```

will load and play a mocap clip. To run a pre-trained policy for a simulated dog, use:

```
python DeepMimic.py --arg_file args/run_dog3d_pace_args.txt
```
To train a policy, use `mpi_run.py`, specifying an argument file and the number of worker processes. For example,

```
python mpi_run.py --arg_file args/train_humanoid3d_spinkick_args.txt --num_workers 16
```

will train a policy to perform a spinkick using 16 workers. As training progresses, it will regularly print out statistics and log them to `output/`, along with a `.ckpt` of the latest policy.

It typically takes about 60 million samples to train one policy, which can take a day when training with 16 workers. 16 workers is likely the maximum number the framework can support, and it can get overwhelmed if more workers are used.
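As a rough back-of-the-envelope check on those numbers, the implied sampling rate can be computed directly (a sketch assuming a full 24 hours of wall-clock training; actual throughput depends on hardware):

```python
samples_total = 60_000_000      # approximate samples needed to train one policy
num_workers = 16
seconds_per_day = 24 * 60 * 60

# Aggregate and per-worker sample rates implied by "60 million samples in a day".
samples_per_sec = samples_total / seconds_per_day
per_worker = samples_per_sec / num_workers
print(f"~{samples_per_sec:.0f} samples/s total, ~{per_worker:.0f} samples/s per worker")
```

So the quoted figures correspond to roughly 700 samples per second in aggregate, or a few dozen per worker.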
A number of argument files for the different skills are already provided in `args/`. `train_[something]_args.txt` files are set up for `mpi_run.py` to train a policy, and `run_[something]_args.txt` files are set up for `DeepMimic.py` to run one of the pretrained policies. To run your own policies, take one of the `run_[something]_args.txt` files and specify the policy you want to run with `--model_file`. Make sure that the reference motion `--motion_file` corresponds to the motion that your policy was trained for, otherwise the policy will not run properly.
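The argument files are plain text containing the same `--flag value` pairs you would pass on the command line, so swapping `--model_file` or `--motion_file` amounts to editing one token. A minimal parser sketch for this style of file (`parse_arg_text` is a hypothetical helper, not part of the codebase, and the assumption that `#` begins a comment is mine):

```python
import shlex

def parse_arg_text(text):
    """Parse '--key value...' tokens into a dict of key -> list of values.

    Assumes '#' starts a comment, as in typical argument files.
    """
    args, key = {}, None
    for tok in shlex.split(text, comments=True):
        if tok.startswith("--"):
            key = tok[2:]
            args[key] = []
        elif key is not None:
            args[key].append(tok)
    return args

example = """
--scene imitate                 # scene type
--motion_file data/motions/humanoid3d_spinkick.txt
--model_file output/agent0_model.ckpt
"""
print(parse_arg_text(example))
```

Keys map to lists because some flags take several values; a flag with no following token simply maps to an empty list.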
Similarly, to train a policy using AMP, run with the corresponding argument files:

```
python mpi_run.py --arg_file args/train_amp_target_humanoid3d_locomotion_args.txt --num_workers 16
```

Pretrained AMP models can be evaluated using:

```
python DeepMimic.py --arg_file args/run_amp_target_humanoid3d_locomotion_args.txt
```
Mocap clips are located in `data/motions/`. To play a clip, first modify `args/play_motion_humanoid3d_args.txt` and specify the file to play with `--motion_file`, then run:

```
python DeepMimic.py --arg_file args/play_motion_humanoid3d_args.txt
```
The motion files follow the JSON format. The `"Loop"` field specifies whether or not the motion is cyclic. `"wrap"` specifies a cyclic motion that will wrap back to the start at the end, while `"none"` specifies an acyclic motion that will stop once it reaches the end. Each vector in the `"Frames"` list specifies a keyframe in the motion. Each frame has the following format:
```
[
  duration of frame in seconds (1D),
  root position (3D),
  root rotation (4D),
  chest rotation (4D),
  neck rotation (4D),
  right hip rotation (4D),
  right knee rotation (1D),
  right ankle rotation (4D),
  right shoulder rotation (4D),
  right elbow rotation (1D),
  left hip rotation (4D),
  left knee rotation (1D),
  left ankle rotation (4D),
  left shoulder rotation (4D),
  left elbow rotation (1D)
]
```
Positions are specified in meters, 3D rotations for spherical joints are specified as quaternions `(w, x, y, z)`, and 1D rotations for revolute joints (e.g. knees and elbows) are represented with a scalar rotation in radians. The root positions and rotations are in world coordinates, while all other joint rotations are in the joint's local coordinates.
To use your own motion clip, convert it to a similar style JSON file.
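Given the layout above, each frame should contain 1 + 3 + 9*4 + 4*1 = 44 values. When converting your own clips, a quick structural check can catch most mistakes. The sketch below is a hypothetical helper, not part of the codebase; the unit-quaternion check on the 4D entries is my addition, with a loose tolerance.

```python
import math

# Per-frame layout from the format above: duration, root position, then rotations.
FRAME_DIMS = [1, 3, 4, 4, 4, 4, 1, 4, 4, 1, 4, 1, 4, 4, 1]
FRAME_SIZE = sum(FRAME_DIMS)  # 44

def validate_motion(motion, tol=1e-3):
    """Check the Loop mode, frame sizes, and that 4D rotations are unit quaternions."""
    assert motion["Loop"] in ("wrap", "none"), "Loop must be 'wrap' or 'none'"
    for frame in motion["Frames"]:
        assert len(frame) == FRAME_SIZE, f"expected {FRAME_SIZE} values, got {len(frame)}"
        offset = FRAME_DIMS[0] + FRAME_DIMS[1]  # skip duration and root position
        for dim in FRAME_DIMS[2:]:
            if dim == 4:
                w, x, y, z = frame[offset:offset + 4]
                norm = math.sqrt(w * w + x * x + y * y + z * z)
                assert abs(norm - 1.0) < tol, "rotation is not a unit quaternion"
            offset += dim
    return True

# Build a single rest-pose frame: identity quaternions, zeroed revolute joints.
identity = [1.0, 0.0, 0.0, 0.0]
frame = [0.0333, 0.0, 0.9, 0.0]
for dim in FRAME_DIMS[2:]:
    frame += identity if dim == 4 else [0.0]

motion = {"Loop": "wrap", "Frames": [frame]}
print(validate_motion(motion))
```

Running this over the parsed JSON of a new clip before loading it into the framework gives an early, readable error instead of a crash in the simulator.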
Troubleshooting:

`ImportError: libGLEW.so.2.1: cannot open shared object file: No such file or directory`

Search for `libGLEW.so.2.1` and link it accordingly:

```
ln /path/to/libGLEW.so.2.1 /usr/lib/x86----/libGLEW.so.2.1
ln /path/to/libGLEW.so.2.1.0 /usr/lib/x86----/libGLEW.so.2.1.0
```

`ImportError: libBulletDynamics.so.2.88: cannot open shared object file: No such file or directory`

```
export LD_LIBRARY_PATH=/usr/local/lib/
```

(This setting is temporary when run in a terminal. The libBullet files are present in that path because `sudo make install` installs them there when building Bullet.)