https://wholebody-b1.github.io/
Code related to the paper "Visual Whole-Body Control for Legged Loco-Manipulation".
Low-level learning curves: wandb
High-level learning curves: wandb
Low-level model weights: https://drive.google.com/file/d/1KIfKu77QkrwbK-YllSWclqb6vJknGgjv/view?usp=sharing
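If you prefer fetching the low-level weights from the command line, here is a minimal download sketch using gdown (not a dependency of this repo; the output filename is only an example):

```python
# Minimal sketch: download the released low-level checkpoint from Google Drive.
# Requires `pip install gdown`; the output filename is only an example.
import gdown

url = "https://drive.google.com/uc?id=1KIfKu77QkrwbK-YllSWclqb6vJknGgjv"
gdown.download(url, "low_level_policy_checkpoint.pt", quiet=False)
```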
conda create -n b1z1 python=3.8 # isaacgym requires python <=3.8
conda activate b1z1
git clone git@github.com:Ericonaldo/visual_whole_body.git
cd visual_whole_body
pip install torch torchvision torchaudio
cd third_party/isaacgym/python && pip install -e .  # IsaacGym python bindings
cd ../..  # back to third_party/
cd rsl_rl && pip install -e .
cd ..
cd skrl && pip install -e .
cd ../..  # back to the repo root
cd low-level && pip install -e .  # the low-level controller package
pip install numpy pydelatin tqdm imageio-ffmpeg opencv-python wandb
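After installation, a quick import check like the sketch below can confirm the editable installs are visible; the module names follow the packages installed above, and isaacgym generally has to be imported before torch in the same process.

```python
# Quick sanity check that the editable installs above are importable.
# isaacgym should be imported before torch.
import isaacgym  # noqa: F401
import torch
import rsl_rl    # noqa: F401
import skrl      # noqa: F401

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```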
- `high-level`: code and environments for the task-relevant visuomotor high-level policy.
- `low-level`: code and environments for the general low-level controller of the quadruped and the arm; its only task is to learn to walk while tracking the target end-effector (ee) pose and the robot velocities.

Detailed code structures can be found in these directories; a conceptual sketch of how the two levels interact is given below.
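To make the division of labor concrete, here is a conceptual sketch of how the two levels interact at rollout time. All class and method names are illustrative placeholders, not the repo's actual API; see the two directories for the real interfaces.

```python
# Conceptual sketch of the two-level hierarchy (names are placeholders, not the repo API).
import numpy as np

class HighLevelPolicy:
    """Visuomotor policy: maps visual + proprioceptive observations to commands."""
    def act(self, depth_image, proprio):
        ee_pose_cmd = np.zeros(6)   # target end-effector pose (position + orientation)
        base_vel_cmd = np.zeros(3)  # target base velocities
        return ee_pose_cmd, base_vel_cmd

class LowLevelController:
    """General whole-body controller: tracks the commands with joint-level actions."""
    def act(self, proprio, ee_pose_cmd, base_vel_cmd):
        num_joints = 18              # placeholder: quadruped legs + arm joints
        return np.zeros(num_joints)  # joint position targets

# One control step (conceptual):
# cmd = high_level.act(depth_image, proprio)
# joint_targets = low_level.act(proprio, *cmd)
```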
1. Train a low-level policy using the code and descriptions in `low-level`.
2. Save the low-level policy checkpoint somewhere accessible.
3. Train the high-level policy using the code and descriptions in `high-level`, assigning the low-level model path in the config yaml file (see the sketch after these steps).
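As a convenience, the sketch below checks that the low-level checkpoint referenced in a high-level config actually exists before launching training. The config path and the key name are hypothetical; check the yaml files under `high-level` for the real schema.

```python
# Hypothetical helper: verify the low-level checkpoint referenced in a high-level config.
# The key name "low_level_policy_ckpt" is illustrative, not the repo's actual schema.
import os
import yaml

def check_low_level_ckpt(config_path, key="low_level_policy_ckpt"):
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    ckpt = cfg.get(key)
    assert ckpt and os.path.isfile(ckpt), f"Low-level checkpoint not found: {ckpt!r}"
    return ckpt

# Example (paths are illustrative):
# check_low_level_ckpt("high-level/cfg/task_config.yaml")
```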
The low-level training also draws heavily on DeepWBC.
If you find this codebase helpful, please consider citing:
@article{liu2024visual,
  title={Visual Whole-Body Control for Legged Loco-Manipulation},
  author={Liu, Minghuan and Chen, Zixuan and Cheng, Xuxin and Ji, Yandong and Yang, Ruihan and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2403.16967},
  year={2024}
}