
# Visual Whole-Body for Loco-Manipulation

https://wholebody-b1.github.io/

Code for the paper *Visual Whole-Body Control for Legged Loco-Manipulation*.

## Model learning reference

- Low-level learning curves: wandb
- High-level learning curves: wandb
- Low-level model weights: https://drive.google.com/file/d/1KIfKu77QkrwbK-YllSWclqb6vJknGgjv/view?usp=sharing

## Set up the environment

```shell
# isaacgym requires python <= 3.8
conda create -n b1z1 python=3.8
conda activate b1z1

git clone git@github.com:Ericonaldo/visual_whole_body.git
cd visual_whole_body

pip install torch torchvision torchaudio

# Install the bundled third-party packages
cd third_party/isaacgym/python && pip install -e .
cd ../..
cd rsl_rl && pip install -e .
cd ..
cd skrl && pip install -e .
cd ../..

# Install the low-level training package
cd low-level && pip install -e .

pip install numpy pydelatin tqdm imageio-ffmpeg opencv-python wandb
```
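After installing, a quick way to sanity-check the environment is to probe whether each package can be imported. This helper is only a sketch, not part of the repo; note that pip names and import names can differ (e.g. `opencv-python` is imported as `cv2`), and the `rsl_rl`/`skrl` import names below are assumptions based on the package directories.

```python
import importlib.util

# Import names to check (assumed from the install steps above;
# pip names may differ from import names, e.g. opencv-python -> cv2).
REQUIRED = ["torch", "isaacgym", "rsl_rl", "skrl", "numpy", "cv2", "wandb"]

def missing_packages(names):
    """Return the subset of import names that cannot be resolved."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages found.")
```

Running this inside the `b1z1` environment should report nothing missing if every install step succeeded.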

## Structure

Detailed notes on the code structure can be found in these directories.

## How it works (roughly)

## Acknowledgements (third-party dependencies)

The low-level training code also draws heavily on DeepWBC.

## Codebase Contributions

## Citation

If you find this codebase helpful, please consider citing:

```bibtex
@article{liu2024visual,
    title={Visual Whole-Body Control for Legged Loco-Manipulation},
    author={Liu, Minghuan and Chen, Zixuan and Cheng, Xuxin and Ji, Yandong and Yang, Ruihan and Wang, Xiaolong},
    journal={arXiv preprint arXiv:2403.16967},
    year={2024}
}
```