Implementation code for our paper "DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles" (arXiv), published in IEEE Transactions on Robotics (T-RO), 2023. This repository contains our DRL-VO code for training and testing the DRL-VO control policy in a 3D human-robot interaction Gazebo simulator. Video demos can be found in the multimedia demonstrations. Below are two GIFs showing our DRL-VO control policy navigating in simulation and in the real world.
Our DRL-VO control policy is a novel learning-based control policy with strong generalizability to new environments that enables a mobile robot to navigate autonomously through spaces filled with both static obstacles and dense crowds of pedestrians. The policy uses a unique combination of input data to generate the desired steering angle and forward velocity: a short history of lidar data, kinematic data about nearby pedestrians, and a sub-goal point. The policy is trained in a reinforcement learning setting using a reward function that contains a novel term based on velocity obstacles to guide the robot to actively avoid pedestrians and move towards the goal. This DRL-VO control policy was tested in a series of 3D simulated experiments with up to 55 pedestrians and an extensive series of hardware experiments using a TurtleBot 2 robot with a 2D Hokuyo lidar and a ZED stereo camera. In addition, our DRL-VO control policy ranked 1st in the simulated competition and 3rd in the final physical competition of the ICRA 2022 BARN Challenge, which was run in highly constrained static environments using a Jackal robot. The deployment code for the ICRA 2022 BARN Challenge can be found in "nav-competition-icra2022-drl-vo".
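As a quick sanity check outside of ROS, a trained policy can be loaded and queried directly with stable-baselines3. The snippet below is a minimal sketch, assuming a PPO model saved as drl_vo.zip (as referenced by drl_vo_nav.launch); the real observation encoding (lidar history, pedestrian kinematic maps, and sub-goal point) is built by the ROS nodes in this repository, so a sampled dummy observation is used here, and the repository's custom feature-extractor class must be importable for loading to succeed:

from stable_baselines3 import PPO

# Load the trained DRL-VO policy (path is illustrative).
model = PPO.load("drl_vo.zip")

# Build a dummy observation with the correct shape; in the real system this
# vector encodes the lidar history, pedestrian kinematics, and sub-goal point.
obs = model.observation_space.sample()

# Query the policy for the desired forward velocity / steering command.
action, _ = model.predict(obs, deterministic=True)
print("sample action (desired forward velocity and steering):", action)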
This package depends on the following packages:
We provide two ways to install our DRL-VO navigation packages on Ubuntu 20.04: 1) install them directly on your PC; 2) use a pre-created Singularity container (no need to configure the environment).
pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install gym==0.18.0 pandas==1.2.1
pip install stable-baselines3==1.1.0
pip install tensorboard psutil cloudpickle
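To verify that the GPU-enabled PyTorch build installed correctly (assuming a CUDA 11.0-capable GPU and driver), you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"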
sudo apt-get install ros-noetic-move-base*
sudo apt-get install ros-noetic-map-server*
sudo apt-get install ros-noetic-amcl*
sudo apt-get install ros-noetic-navigation*
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
wget https://raw.githubusercontent.com/zzuxzt/turtlebot2_noetic_packages/master/turtlebot2_noetic_install.sh
sudo sh turtlebot2_noetic_install.sh
cd ~/catkin_ws/src
git clone https://github.com/TempleRAIL/robot_gazebo.git
git clone https://github.com/TempleRAIL/pedsim_ros_with_gazebo.git
git clone https://github.com/TempleRAIL/drl_vo_nav.git
cd ..
catkin_make
source ~/catkin_ws/devel/setup.sh
Install the Singularity software:
cd ~
wget https://github.com/sylabs/singularity/releases/download/v3.9.7/singularity-ce_3.9.7-bionic_amd64.deb
sudo apt install ./singularity-ce_3.9.7-bionic_amd64.deb
Download the pre-created "drl_vo_container.sif" to your home directory.
Install the DRL-VO ROS navigation packages:
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/TempleRAIL/robot_gazebo.git
git clone https://github.com/TempleRAIL/pedsim_ros_with_gazebo.git
git clone https://github.com/TempleRAIL/drl_vo_nav.git
cd ..
catkin_make
source ~/catkin_ws/devel/setup.sh
Press Ctrl+D to exit the Singularity container.
roscd drl_vo_nav
cd ..
sh run_drl_vo_policy_training_desktop.sh ~/drl_vo_runs
roscd drl_vo_nav
cd ..
sh run_drl_vo_policy_training_server.sh ~/drl_vo_runs
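Training progress can be monitored with TensorBoard (installed above). Assuming the training script writes its logs under the output directory passed on the command line (~/drl_vo_runs here), run:

tensorboard --logdir ~/drl_vo_runs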
roscd drl_vo_nav
cd ..
sh run_drl_vo_navigation_demo.sh
You can then use the "2D Nav Goal" button in RViz to set an arbitrary goal for the robot, as shown below:
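A goal can also be sent from the command line instead of RViz; this assumes the standard /move_base_simple/goal topic and uses an example pose in the map frame:

rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 2.0, y: 1.0, z: 0.0}, orientation: {w: 1.0}}}'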
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
source ~/catkin_ws/devel/setup.sh
roscd drl_vo_nav
cd ..
sh run_drl_vo_policy_training_desktop.sh ~/drl_vo_runs
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
source ~/catkin_ws/devel/setup.sh
roscd drl_vo_nav
cd ..
sh run_drl_vo_policy_training_server.sh ~/drl_vo_runs
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
source ~/catkin_ws/devel/setup.sh
roscd drl_vo_nav
cd ..
sh run_drl_vo_navigation_demo.sh
You can then use the "2D Nav Goal" button in RViz to set an arbitrary goal for the robot, as shown below:
roscd drl_vo_nav
cd ..
git checkout deploy
cd ../..
catkin_make
source ~/catkin_ws/devel/setup.sh
Modify the following arguments in drl_vo_nav.launch to match your robot and environment configuration:
<!-- Map -->
<arg name="map_file" default="$(find drl_vo_nav)/maps/coe_full_lobby/coe_full_lobby2.yaml"/>
<arg name="model_file" default="$(find drl_vo_nav)/src/model/drl_vo.zip"/>
<arg name="rviz" default="false"/>
<!-- Subscriber topics -->
<arg name="scan_topic" default="scan"/> <!-- sensor_msgs::LaserScan -->
<arg name="ped_topic" default="zed_node/obj_det/objects"/> <!-- zed_interfaces::object_stamped -->
<arg name="vel_topic" default="jackal_velocity_controller/cmd_vel"/> <!-- geometry_msgs::Twist -->
<arg name="odom_topic" default="odometry/filtered" /> <!-- nav_msgs::Odometry -->
<!-- Publisher topics -->
<arg name="smooth_cmd_vel_topic" default="cmd_vel"/> <!-- robot control command: geometry_msgs::Twist -->
<!-- AMCL initial pose -->
<arg name="initial_pose_x" default="0.0"/>
<arg name="initial_pose_y" default="0.0"/>
<arg name="initial_pose_a" default="0.0"/>
<!-- TF frames -->
<arg name="base_frame_id" default="base_link"/>
<arg name="global_frame_id" default="map"/>
<arg name="odom_frame_id" default="odom"/>
You can then use roslaunch to run DRL-VO navigation:
roslaunch drl_vo_nav drl_vo_nav.launch
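The launch arguments listed above can also be overridden on the command line instead of editing the launch file; the topic names below are placeholders for your robot's actual topics:

roslaunch drl_vo_nav drl_vo_nav.launch scan_topic:=front/scan odom_topic:=odom rviz:=true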
@article{xie2023drl,
author={Xie, Zhanteng and Dames, Philip},
journal={IEEE Transactions on Robotics},
title={{DRL-VO}: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles},
year={2023},
volume={39},
number={4},
pages={2700-2719},
doi={10.1109/TRO.2023.3257549}
}
@article{xie2023drl,
title={{DRL-VO}: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles},
author={Xie, Zhanteng and Dames, Philip},
journal={arXiv preprint arXiv:2301.06512},
year={2023}
}
@inproceedings{xie2021towards,
title={Towards safe navigation through crowded dynamic environments},
author={Xie, Zhanteng and Xin, Pujie and Dames, Philip},
booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={4934--4940},
year={2021},
organization={IEEE}
}