dadecampo / SafeRLAUV

Safe Reinforcement Learning for Autonomous Underwater Vehicles
MIT License

Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning

This paper was accepted at the 1st Reinforcement Learning Conference (RLC), August 9–12, 2024, where the project received one of the Outstanding Paper Awards. The full paper is available at this link.

Authors

This project provides a series of underwater and surface environments built in Unity for testing different Deep Reinforcement Learning approaches to autonomous navigation, in which agents such as AUVs and surface drones must traverse a marine environment. The water is simulated with ZibraAI Liquids Pro, and the agents are trained with ML-Agents.

(a) AUV overview. (b) Layout of the rover's proximity sensors; these sensors let the agent estimate its distance to nearby obstacles and make decisions accordingly.

Environment Setup

We developed and tested this project on Windows 10/11, so we recommend using the same platform. Once the environment is set up, you can run and replicate our training experiments. By purchasing ZibraAI Liquids Pro, or requesting a demo of it, you can also modify the objects affected by the plugin inside the Unity project.

  1. Download Anaconda for your system.

  2. Install Anaconda.

  3. Clone the SafeRLAUV repository using Git:

    • git clone https://github.com/dadecampo/SafeRLAUV

  4. Set up the Python environment:

    • conda create --name aquatic_navigation python=3.10.12
    • conda activate aquatic_navigation
    • cd SafeRLAUV\Python
    • pip3 install -r requirements.txt
    • pip3 install -e .
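
To check that the installation succeeded, you can ask the ML-Agents CLI for its help text; if the following command prints the available options, the trainer used in the commands below is ready:

    • mlagents-learn --help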

Repository organization

Inside the 'TrainerConfig' folder, you will find the YAML files that define the hyperparameters and settings of the various training phases:

  • the AUV's trainers (AUV folder)
  • the surface drone's trainers (OnSurface folder)
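
As a quick way to see which hyperparameters a given phase uses, you can load one of these files from Python. This is only a convenience sketch: it assumes the standard ML-Agents config layout with a top-level behaviors key, and PyYAML being available through the requirements installed above.

```python
# Sketch: inspect a trainer config (path taken from the training commands below).
# Assumes the standard ML-Agents layout with a top-level "behaviors" key.
import yaml  # PyYAML, pulled in by the ML-Agents requirements

with open("TrainerConfig/AUV/trainer_lesson_1.yaml") as f:
    cfg = yaml.safe_load(f)

for behavior, settings in cfg.get("behaviors", {}).items():
    print(behavior, settings.get("trainer_type"), settings.get("hyperparameters"))
```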

Training

This section allows you to reproduce the training we conducted; below are the commands for each phase of the training.

AUV training: in terms of the final reward obtained at the end of the various training sessions, we observed no difference between the two training methodologies; both reached convergence. However, we will see an important improvement in the testing phase.

First of all, let's enter the folder where the training YAML files are located.

cd TrainerConfig

Underwater Environments

Here you can download the Builds!
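
Once a build is downloaded, you can sanity-check it from Python with the low-level mlagents_envs API before launching a full training run. This is a minimal sketch, not part of the training pipeline: the build path is a placeholder, and the behavior name is simply whatever the build exposes.

```python
# Sketch: connect to a Unity build and drive it with random actions.
# The build path is a placeholder; point it at one of the downloaded builds.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="../Builds/NoWater_Env/SafeRLAUV")
env.reset()

behavior_name = list(env.behavior_specs)[0]   # e.g. the AUV behavior
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        # Sample random actions with the shape this behavior expects.
        env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
    env.step()

env.close()
```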

Curriculum Learning

Curriculum Learning is a structured approach to training in which the complexity of the learning task is gradually increased over time: the model is first exposed to simpler, more manageable examples before being challenged with progressively harder ones. In our approach, we subdivide training into three lessons.

Lesson 1

In the first lesson of curriculum learning, the agent is placed inside the simplest training cave, where it must learn to reach the opposite end without worrying about countering the force of the currents, since the water is disabled.

mlagents-learn .\AUV\trainer_lesson_1.yaml --run-id=LESSON_1 --env=../Builds/NoWater_Env/SafeRLAUV.app --width=512 --height=512

Lesson 2

In the second lesson, the agent is placed in a slightly more complex cave and must also learn to move in the presence of water, whose currents are set to half the strength of the final ones.

mlagents-learn .\AUV\trainer_lesson_2.yaml --run-id=LESSON_2 --env=../Builds/HalfWater_Env/SafeRLAUV.app --width=512 --height=512 --initialize-from=LESSON_1

Lesson 3

In the third and final lesson, the agent will have to face a training environment that includes all the complexities of a test environment, with the currents set to their maximum strength.

mlagents-learn .\AUV\trainer_lesson_3.yaml --run-id=LESSON_3 --env=../Builds/FullWater_Env/SafeRLAUV.app --width=512 --height=512 --initialize-from=LESSON_2
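
If you prefer to run the whole curriculum unattended, the three lessons above can be chained from a small helper script. This is just a convenience sketch that mirrors the commands exactly and should be run from the TrainerConfig folder.

```python
# Convenience sketch: run the three curriculum lessons back to back.
# Paths, run IDs and flags mirror the mlagents-learn commands above.
import subprocess

lessons = [
    ("LESSON_1", "AUV/trainer_lesson_1.yaml", "../Builds/NoWater_Env/SafeRLAUV.app", None),
    ("LESSON_2", "AUV/trainer_lesson_2.yaml", "../Builds/HalfWater_Env/SafeRLAUV.app", "LESSON_1"),
    ("LESSON_3", "AUV/trainer_lesson_3.yaml", "../Builds/FullWater_Env/SafeRLAUV.app", "LESSON_2"),
]

for run_id, config, build, init_from in lessons:
    cmd = ["mlagents-learn", config, f"--run-id={run_id}", f"--env={build}",
           "--width=512", "--height=512"]
    if init_from is not None:
        cmd.append(f"--initialize-from={init_from}")
    subprocess.run(cmd, check=True)  # stop the chain if a lesson fails
```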

End-to-End

In End-to-End training, the agent undergoes a single standard training run, after which it should be able to tackle the subsequent testing phases. In this case, the agent faces a complex environment (equivalent to the one encountered in the third curriculum lesson), with currents at their maximum strength.

mlagents-learn .\AUV\trainer_e2e.yaml --run-id=LESSON_E2E --env=../Builds/FullWater_Env/SafeRLAUV.app --width=512 --height=512

Surface Environments

Unique Lesson

In the single lesson designed for the surface drone, the agent must traverse an area containing various obstacles, avoiding collisions with them before reaching the target area at the opposite end of the scenario. Two training options are available: safe and standard.

Safe

mlagents-learn .\OnSurface\trainer_lesson_1_safe.yaml --run-id=LESSON_1 --env=../Builds/OnSurface_Env/SafeRLAUV.app --width=512 --height=512 --time-scale=1

Standard

mlagents-learn .\OnSurface\trainer_lesson_1_standard.yaml --run-id=LESSON_1 --env=../Builds/OnSurface_Env/SafeRLAUV.app --width=512 --height=512 --time-scale=1
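
For any of the runs above (AUV or surface drone), recent ML-Agents releases write TensorBoard summaries under the results folder by default, so reward curves and other training statistics can be inspected with:

tensorboard --logdir results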