
This is the official repository of the PIC4rl-gym, presented in the paper https://ieeexplore.ieee.org/abstract/document/10193996 (accepted at ICCCR 2023).

Project: PIC4 Reinforcement Learning Gym (PIC4rl_gym)

Owners: Mauro Martini, Andrea Eirale, Simone Cerrato

Date: 2021-12



Description of the project

The PIC4rl_gym project develops a set of ROS 2 packages to easily train deep reinforcement learning algorithms for autonomous navigation in a Gazebo simulation environment. A great variety of sensors (cameras, LiDARs, etc.) and platforms are available for custom simulations in both indoor and outdoor stages. Official paper of the project: https://ieeexplore.ieee.org/abstract/document/10193996. Please consider citing our research if you find it useful for your work; the reference is given at the bottom.

The repository is organized as follows:

robot platforms: the PIC4rl-gym provides a flexible, configurable Gazebo simulation for your training. You can use any robotic platform you have as a ROS 2 / Gazebo package. If you would like to start with a set of ready-to-use platforms, download the PIC4rl_gym_Platforms repository and add it to your workspace: https://github.com/PIC4SeR/PIC4rl_gym_Platforms.

You should create your models and worlds for the Gazebo simulation and place them in the respective folders. A full set of Gazebo worlds and models is available for download if you want to use our work for your research:

The PIC4rl_gym packages for training agents in simulation:


User Guide

Main scripts in pic4rl training package:

Config files:

Commands:

After training, to plot the reward evolution, edit the script pic4rl/utils/plot_reward.py and set the path to the training directory (father_path). Then:
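For illustration only, a minimal sketch of such a plotting script is shown below. It assumes the training directory contains a plain-text log with one episode reward per line; the reward.txt filename and the log format are assumptions, not the actual pic4rl/utils/plot_reward.py implementation.

```python
# Sketch: plot the reward evolution of a training run.
# Assumed log format: one total episode reward per line in <father_path>/reward.txt.
from pathlib import Path


def moving_average(values, window=10):
    """Smooth the reward curve with a simple trailing moving average."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


def plot_rewards(father_path, log_name="reward.txt"):
    # Parse one reward value per line from the (assumed) log file.
    rewards = [float(line) for line in
               Path(father_path, log_name).read_text().split() if line]
    smoothed = moving_average(rewards)
    try:
        import matplotlib.pyplot as plt  # optional dependency
        plt.plot(rewards, alpha=0.3, label="episode reward")
        plt.plot(smoothed, label="moving average")
        plt.xlabel("episode")
        plt.ylabel("reward")
        plt.legend()
        plt.show()
    except ImportError:
        # Fall back to printing the smoothed tail if matplotlib is missing.
        print("matplotlib not installed; smoothed tail:", smoothed[-3:])
```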

To run the tester, edit the parameter file pic4rl/config/training_params.yaml: uncomment the "evaluate" parameter and set "model-dir" to the model directory where the checkpoints have been saved. For simplicity, you can copy promising models into the pic4rl/models folder. Launch the simulation and then the starter with the same terminal commands (colcon-build the workspace if needed).
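As a sketch, the relevant fragment of training_params.yaml might look like the following; only the "evaluate" and "model-dir" keys come from the instructions above, and the example path is hypothetical:

```yaml
# fragment of pic4rl/config/training_params.yaml (surrounding keys omitted)
evaluate: True                          # uncomment to run the tester instead of training
model-dir: "pic4rl/models/my_best_run"  # example: directory holding the saved checkpoints
```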

In your ~/.bashrc, export the Gazebo models path:
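For example (the models directory shown is an assumption; adjust it to where the Gazebo models live in your workspace):

```shell
# Append to ~/.bashrc so Gazebo can find the simulation models.
# NOTE: the models directory below is an example path, not the repo's exact layout.
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:~/PIC4rl_gym/simulation/models
```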

Tested software versions

We strongly suggest setting up your learning environment in a Docker container, starting from pre-built CUDA images. Tested Docker image versions:

Try building tf2rl from its setup.py:
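A typical setuptools workflow would be the following sketch; it assumes pip targets the same Python environment you use for training, and the directory guard keeps it safe if the checkout path differs on your machine:

```shell
# Install the bundled tf2rl package in editable mode (standard setuptools workflow).
TF2RL_DIR=~/PIC4rl_gym/training/tf2rl
if [ -d "$TF2RL_DIR" ]; then
    pip install -e "$TF2RL_DIR"
fi
```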

or manually install the packages listed in setup.py at ~/PIC4rl_gym/training/tf2rl/setup.py.

References

@inproceedings{martini2023pic4rl,
  title={{PIC4rl-gym}: A {ROS2} Modular Framework for Robots Autonomous Navigation with Deep Reinforcement Learning},
  author={Martini, Mauro and Eirale, Andrea and Cerrato, Simone and Chiaberge, Marcello},
  booktitle={2023 3rd International Conference on Computer, Control and Robotics (ICCCR)},
  pages={198--202},
  year={2023},
  organization={IEEE}
}