Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale. However, sim-to-real approaches typically rely on manual design and tuning of the task reward function as well as the simulation physics parameters, rendering the process slow and human-labor intensive. In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design. Our LLM-guided sim-to-real approach requires only the physics simulation for the target task and automatically constructs suitable reward functions and domain randomization distributions to support real-world transfer. We first demonstrate our approach can discover sim-to-real configurations that are competitive with existing human-designed ones on quadruped locomotion and dexterous manipulation tasks. Then, we showcase that our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball, without iterative manual design.
This repository contains code for DrEureka's reward generation, RAPP, and domain randomization generation pipelines as well as the forward locomotion and globe walking environments. The two environments are modified from Rapid Locomotion and Dribblebot, respectively.
The following instructions will install everything under one Conda environment. We have tested on Ubuntu 20.04.
```
conda create -n dr_eureka python=3.8
conda activate dr_eureka
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
Then, install Isaac Gym Preview 4 (download it from https://developer.nvidia.com/isaac-gym):

```
tar -xf IsaacGym_Preview_4_Package.tar.gz
cd isaacgym/python
pip install -e .
```
Next, install the DrEureka package:

```
cd dr_eureka
pip install -e .
```

Finally, install the two environments:

```
cd forward_locomotion
pip install -e .
cd ../globe_walking
pip install -e .
```
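To verify the installation, you can run a quick import check. Note that Isaac Gym must be imported before `torch`, or it will raise an `ImportError`:

```python
# Minimal sanity check for the DrEureka environment.
import isaacgym  # noqa: F401  (must precede the torch import)
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```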
We'll use forward locomotion (`forward_locomotion`) as an example. The following steps can also be done for globe walking (`globe_walking`).
First, run reward generation (Eureka):
```
cd ../eureka
python eureka.py env=forward_locomotion
```
At the end, the final best reward will be saved in `forward_locomotion/go1_gym/rewards/eureka_reward.py` and used for subsequent training runs. The Eureka logs will be stored in `eureka/outputs/[TIMESTAMP]`, and the run directory of the best-performing policy will be printed to the terminal.
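For intuition, the generated reward is a standalone function spliced into the boilerplate of `go1_gym/rewards/eureka_reward_template.py`. The sketch below is a hand-written illustration of the kind of reward Eureka tends to produce; the function signature and the `env` attributes are hypothetical, not the template's actual interface:

```python
import torch

def compute_reward(env):
    # Track the commanded forward velocity (env fields are illustrative).
    vel_error = torch.square(env.base_lin_vel[:, 0] - env.commands[:, 0])
    velocity_reward = torch.exp(-vel_error / 0.25)

    # Penalize large actions to encourage smooth, hardware-friendly motion.
    action_penalty = 0.01 * torch.sum(torch.square(env.actions), dim=1)

    total_reward = velocity_reward - action_penalty
    # Returning per-term values lets the pipeline log each reward component.
    return total_reward, {"velocity_reward": velocity_reward,
                          "action_penalty": action_penalty}
```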
Second, copy the run directory and run RAPP:
```
cd ../dr_eureka
python rapp.py env=forward_locomotion run_path=[YOUR_RUN_DIRECTORY]
```
This will update the prompt in `dr_eureka/prompts/initial_users/forward_locomotion.txt` with the computed RAPP bounds.
Third, run DR generation with the new reward and RAPP bounds:
```
python dr_eureka.py env=forward_locomotion
```
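At this stage, the LLM proposes randomization ranges (within the RAPP bounds) that DrEureka writes into the DR section of the environment config. Purely as an illustration, with hypothetical parameter names and values rather than actual generated output, such a configuration might resemble:

```python
# Hypothetical DR ranges in a legged_gym-style nested-class config.
class domain_rand:
    randomize_friction = True
    friction_range = [0.4, 2.0]        # ground friction coefficient
    randomize_base_mass = True
    added_mass_range = [-1.0, 3.0]     # payload offset (kg)
    randomize_motor_strength = True
    motor_strength_range = [0.9, 1.1]  # scale on motor torques
    push_robots = True
    max_push_vel_xy = 0.5              # random base pushes (m/s)
```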
The trained policies are then ready for deployment; see the section below.
Our deployment infrastructure is based on Walk These Ways. We'll use forward locomotion as an example, though the deployment setup for both environments is essentially the same.

First, point `forward_locomotion/go1_gym_deploy/scripts/deploy_policy.py` to your trained policy. Note that you can have multiple policies at once and switch between them.

Next, power on the Go1 and connect your computer to it (the robot's onboard computer is at `192.168.123.15`). Send the deployment code to the robot:

```
cd forward_locomotion/go1_gym_deploy/scripts
./send_to_unitree.sh
```

Then, on the robot (e.g., over `ssh`), install the deployment code:

```
chmod +x installer/install_deployment_code.sh
cd ~/go1_gym/go1_gym_deploy/scripts
sudo ../installer/install_deployment_code.sh
```

Start the Unitree SDK:

```
cd ~/go1_gym/go1_gym_deploy/autostart
./start_unitree_sdk.sh
```

Start and enter the controller Docker container:

```
cd ~/go1_gym/go1_gym_deploy/docker
sudo make autostart && sudo docker exec -it foxy_controller bash
```

Finally, inside the container, build the project and run the deployment script:

```
cd /home/isaac/go1_gym && rm -r build && python3 setup.py install && cd go1_gym_deploy/scripts && python3 deploy_policy.py
```
DrEureka manipulates pre-defined environments by inserting generated reward functions and domain randomization configurations. To do so, we have designed the environment code to be modular and easily configurable. Below, we explain how the components of our code interact with each other, using forward locomotion as an example.

`eureka/eureka.py` runs the reward generation process. It uses:
- The environment source code shown to the LLM, at `eureka/envs/forward_locomotion.py`. This is a shortened version of the actual environment code to save token usage.
- The reward function signature, at `eureka/prompts/reward_signatures/forward_locomotion.txt`. This file should contain a simple format for the LLM to follow. It may also contain additional instructions or explanations of the format, if necessary.
- The training script, set as `train_script: scripts/train.py` in `eureka/cfg/env/forward_locomotion.yaml`.
- The reward template and output files, set as `reward_template_file: go1_gym/rewards/eureka_reward_template.py` and `reward_output_file: go1_gym/rewards/eureka_reward.py` in `eureka/cfg/env/forward_locomotion.yaml`. Eureka reads the template file's boilerplate code, fills in the generated reward function, and writes the result to the output file for use during training.
- A metrics parser, defined in `eureka/utils/misc.py` as `construct_run_log(stdout_str)`. This function parses the training script's standard output into a dictionary (see the sketch after this list). Alternatively, it can be used to load a file containing metrics saved during training (for example, TensorBoard logs).
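A minimal sketch of what `construct_run_log(stdout_str)` is expected to do, assuming a simple `name: value` line format for logged metrics (the actual training script's output format may differ):

```python
import re
from collections import defaultdict

def construct_run_log(stdout_str):
    """Parse training stdout into {metric_name: [value per logged iteration]}."""
    run_log = defaultdict(list)
    # Assumed format: lines such as "episode_reward: 12.34".
    pattern = re.compile(r"^\s*([\w/]+):\s*(-?\d+(?:\.\d+)?)\s*$")
    for line in stdout_str.splitlines():
        match = pattern.match(line)
        if match:
            run_log[match.group(1)].append(float(match.group(2)))
    return dict(run_log)
```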
`dr_eureka/rapp.py` computes the RAPP bounds. It uses:
- The evaluation script, set as `play_script: scripts/play.py` in `dr_eureka/cfg/env/forward_locomotion.yaml`.
- The DR template and output files, set as `dr_template_file: go1_gym/envs/base/legged_robot_config_template.py` and `dr_output_file: go1_gym/envs/base/legged_robot_config.py` in `dr_eureka/cfg/env/forward_locomotion.yaml`. Like the reward template/output setup, DrEureka fills in the boilerplate code and writes the result to the output file for use during evaluation.
- The parameters and candidate values to probe, defined as `parameter_test_vals` in `dr_eureka/rapp.py`.
- A success criterion, defined as `forward_locomotion_success()` in `dr_eureka/rapp.py`. A simplified sketch of the overall search follows this list.
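Conceptually, RAPP evaluates a pretrained policy while pushing one parameter at a time to extreme values and keeps the range under which the policy still succeeds. The sketch below captures that search loop; `evaluate_policy` and `default_params` are hypothetical stand-ins for the actual evaluation plumbing:

```python
def compute_rapp_bounds(parameter_test_vals, default_params,
                        evaluate_policy, success_fn):
    """For each parameter, keep the extreme test values the policy tolerates."""
    bounds = {}
    for name, test_vals in parameter_test_vals.items():
        feasible = []
        for val in sorted(test_vals):
            # Probe one parameter at a time, leaving the rest at defaults.
            rollout = evaluate_policy({**default_params, name: val})
            if success_fn(rollout):
                feasible.append(val)
        if feasible:
            bounds[name] = (min(feasible), max(feasible))
    return bounds
```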
`dr_eureka/dr_eureka.py` runs the DR generation process. It uses:
- The initial user prompt containing the RAPP bounds, at `dr_eureka/prompts/initial_users/forward_locomotion.txt`. This uses the direct output of `dr_eureka/rapp.py`.
- The generated reward function, at `reward_output_file: go1_gym/rewards/eureka_reward.py`.
- The same script and template/output settings as above, defined in `dr_eureka/cfg/env/forward_locomotion.yaml`.

We thank the following open-source projects: Rapid Locomotion, Dribblebot, and Walk These Ways.
This codebase is released under the MIT License.
If you find our work useful, please consider citing us!
```
@inproceedings{ma2024dreureka,
  title     = {DrEureka: Language Model Guided Sim-To-Real Transfer},
  author    = {Yecheng Jason Ma and William Liang and Hungju Wang and Sam Wang and Yuke Zhu and Linxi Fan and Osbert Bastani and Dinesh Jayaraman},
  year      = {2024},
  booktitle = {Robotics: Science and Systems (RSS)}
}
```