Simulation verification and physical deployment of robot reinforcement learning algorithms, suitable for quadruped, wheeled, and humanoid robots. "sar" stands for "simulation and real".
Clone the code
git clone https://github.com/fan-ziqi/rl_sar.git
This project relies on ROS Noetic (Ubuntu 20.04). After installing ROS, install the dependency libraries:
sudo apt install ros-noetic-teleop-twist-keyboard ros-noetic-controller-interface ros-noetic-gazebo-ros-control ros-noetic-joint-state-controller ros-noetic-effort-controllers ros-noetic-joint-trajectory-controller
Download and deploy libtorch at any location:
cd /path/to/your/torchlib
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.0.1+cpu.zip -d ./
echo 'export Torch_DIR=/path/to/your/torchlib' >> ~/.bashrc
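With Torch_DIR exported, CMake can locate libtorch through find_package in config mode. As an illustrative fragment only (the target name rl_example is a placeholder, and rl_sar's own CMakeLists may resolve the path differently):

```cmake
# Locate libtorch; Torch_DIR (exported above) serves as a search hint,
# or pass -DCMAKE_PREFIX_PATH=/path/to/your/torchlib/libtorch explicitly.
find_package(Torch REQUIRED)

add_executable(rl_example main.cpp)
target_link_libraries(rl_example "${TORCH_LIBRARIES}")
set_property(TARGET rl_example PROPERTY CXX_STANDARD 17)
```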
Install yaml-cpp
git clone https://github.com/jbeder/yaml-cpp.git
cd yaml-cpp && mkdir build && cd build
cmake -DYAML_BUILD_SHARED_LIBS=on .. && make
sudo make install
sudo ldconfig
Install lcm
git clone https://github.com/lcm-proj/lcm.git
cd lcm && mkdir build && cd build
cmake .. && make
sudo make install
sudo ldconfig
Customize the following two functions in your code to adapt to different models:
torch::Tensor forward() override;
torch::Tensor compute_observation() override;
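The real overrides operate on torch::Tensor: compute_observation() assembles the policy input from sensor readings, and forward() runs the network on it. Below is a stdlib-only sketch of the observation-assembly pattern, with std::vector standing in for torch::Tensor; all field names and dimensions are illustrative assumptions, not the project's actual API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sensor state; real code reads these from the robot or simulator.
struct RobotState {
    std::vector<float> base_ang_vel;  // 3: body angular velocity
    std::vector<float> gravity_vec;   // 3: projected gravity
    std::vector<float> commands;      // 3: vx, vy, yaw rate
    std::vector<float> joint_pos;     // 12 joints for a quadruped
    std::vector<float> joint_vel;     // 12
    std::vector<float> last_actions;  // 12
};

// Concatenate the pieces into one flat observation, mirroring what a
// compute_observation() override does with torch::cat on tensors.
std::vector<float> compute_observation(const RobotState& s) {
    std::vector<float> obs;
    for (const auto* part : {&s.base_ang_vel, &s.gravity_vec, &s.commands,
                             &s.joint_pos, &s.joint_vel, &s.last_actions}) {
        obs.insert(obs.end(), part->begin(), part->end());
    }
    return obs;
}
```

The ordering of the concatenated pieces must match the observation layout the policy was trained with.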
Then compile in the root directory
cd ..
catkin build
Before running, copy the trained pt model file to rl_sar/src/rl_sar/models/YOUR_ROBOT_NAME, and configure the parameters in config.yaml.
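config.yaml holds the per-robot inference parameters. The fragment below is illustrative only; the key names and values are assumptions, so consult the actual file for the real schema:

```yaml
a1:                        # robot name, matching the models/ subdirectory
  model_name: "model.pt"   # TorchScript policy exported from training
  dt: 0.005                # control timestep in seconds
  num_observations: 45
  clip_actions: 100.0
  action_scale: 0.25
  rl_kp: [20.0, 20.0, 20.0]  # joint stiffness gains
  rl_kd: [0.5, 0.5, 0.5]     # joint damping gains
```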
Open a terminal and launch the Gazebo simulation environment:
source devel/setup.bash
roslaunch rl_sar gazebo_<ROBOT>.launch
Open a new terminal and launch the control program:
source devel/setup.bash
(for cpp version) rosrun rl_sar rl_sim
(for python version) rosrun rl_sar rl_sim.py
Where <ROBOT> can be a1, gr1t1, or gr1t2.
Control:
The Unitree A1 can be connected using either wireless or wired methods:
Open a new terminal and start the control program
source devel/setup.bash
rosrun rl_sar rl_real_a1
Press the R2 button on the controller to switch the robot to the default standing position, press R1 to switch to RL control mode, and press L2 in any state to return to the initial lying position. Pushing the left stick up/down controls x-axis velocity and left/right controls yaw; pushing the right stick left/right controls y-axis velocity.
Alternatively, press 0 on the keyboard to switch the robot to the default standing position, press P to switch to RL control mode, and press 1 in any state to return to the initial lying position. W/S controls the x-axis, A/D controls yaw, and J/L controls the y-axis.
In the following, let ROBOT represent the name of your robot.
Please cite the following if you use this code or parts of it:
@software{fan-ziqi2024rl_sar,
  author = {fan-ziqi},
  title  = {{rl_sar: Simulation Verification and Physical Deployment of Robot Reinforcement Learning Algorithm}},
  url    = {https://github.com/fan-ziqi/rl_sar},
  year   = {2024}
}