This repository contains the official implementation of the paper Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning.
Before getting started, ensure that you have Python 3.6+ ready. We recommend activating a new virtual environment for the repository:
python -m venv robot-visual-navigation-env
source robot-visual-navigation-env/bin/activate
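As a quick sanity check that the activated environment meets the version requirement (on some systems you may need to call python3 instead of python):
python --version  # should report Python 3.6 or newer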
Start by cloning this repository and installing the dependencies:
git clone https://github.com/jkulhanek/robot-visual-navigation.git
cd robot-visual-navigation
pip install -r requirements.txt
cd python
For the DMHouse package, we recommend starting with Ubuntu 18.04 or newer and installing the dependencies as follows:
apt-get install libsdl2-dev libosmesa6-dev gettext g++ unzip zip curl gnupg libstdc++6
The pre-trained models can be downloaded from:
https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/checkpoints/dmhouse-models.tar.gz
https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/checkpoints/turtlebot-models.tar.gz
Download them using the following commands:
mkdir -p ~/.cache/robot-visual-navigation/models
# Download DMHouse models
curl -L https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/checkpoints/dmhouse-models.tar.gz | tar -xz -C ~/.cache/robot-visual-navigation/models
# Download real-world dataset models
curl -L https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/checkpoints/turtlebot-models.tar.gz | tar -xz -C ~/.cache/robot-visual-navigation/models
# Download real-world dataset
mkdir -p ~/.cache/robot-visual-navigation/datasets
curl -L -o ~/.cache/robot-visual-navigation/datasets/turtle_room_compiled.hdf5 https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/datasets/turtle_room_compiled.hdf5
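As a quick sanity check (the exact directory layout of the extracted archives may differ from this sketch), verify that the model checkpoints and the dataset file are in place:
ls ~/.cache/robot-visual-navigation/models
ls -lh ~/.cache/robot-visual-navigation/datasets/turtle_room_compiled.hdf5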
Run the evaluation on the DMHouse simulator to verify that everything is working correctly:
python evaluate_dmhouse.py dmhouse --num-episodes 100
Similarly for the real-world dataset:
python evaluate_turtlebot.py turtlebot --num-episodes 100
Alternatively, you can also use other agents, as described in the Training section.
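For example, assuming the evaluation scripts accept the same experiment names as the training script (an assumption, not verified here), the UNREAL baselines could be evaluated as follows:
python evaluate_dmhouse.py dmhouse-unreal --num-episodes 100
python evaluate_turtlebot.py turtlebot-unreal --num-episodes 100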
Start the training by running ./train.py <trainer>, where <trainer> is the experiment you want to run. The available experiments are the following:
- dmhouse: our method (A2CAT-VN) trained with the DMHouse simulator
- dmhouse-unreal: UNREAL trained with the DMHouse simulator
- dmhouse-a2c: PAAC trained with the DMHouse simulator
- turtlebot: our method (A2CAT-VN) fine-tuned on the real-world dataset
- turtlebot-unreal: UNREAL fine-tuned on the real-world dataset
- turtlebot-a2c: PAAC fine-tuned on the real-world dataset
- turtlebot-noprior: our method (A2CAT-VN) trained from scratch on the real-world dataset
- turtlebot-unreal-noprior: UNREAL trained from scratch on the real-world dataset
- turtlebot-a2c-noprior: PAAC trained from scratch on the real-world dataset
All model checkpoints are available online:
https://data.ciirc.cvut.cz/public/projects/2021RealWorldNavigation/checkpoints
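For example, a two-stage run (simulator pre-training followed by real-world fine-tuning) would look as follows; this is a sketch assuming the turtlebot experiment picks up the simulator checkpoint automatically, as the fine-tuning descriptions above suggest:
./train.py dmhouse     # pre-train A2CAT-VN in the DMHouse simulator
./train.py turtlebot   # fine-tune on the real-world dataset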
Please use the following citation:
@article{kulhanek2021visual,
title={Visual navigation in real-world indoor environments using end-to-end deep reinforcement learning},
author={Kulh{\'a}nek, Jon{\'a}{\v{s}} and Derner, Erik and Babu{\v{s}}ka, Robert},
journal={IEEE Robotics and Automation Letters},
volume={6},
number={3},
pages={4345--4352},
year={2021},
publisher={IEEE}
}