🤖 Sim-to-Real Virtual Guidance for Robot Navigation
An effective, easy-to-implement, and low-cost modular framework for robot navigation tasks. Two versions of the documentation are available: the official one and a nicer-looking one.
🏆 This project won second place in the NVIDIA AI at the Edge Challenge.
👾 Variant
- Multiple Device Version
⚡️ Features
- Automatically navigates the robot to a specific goal without any high-cost sensors.
- Relies on a single camera and deep learning methods.
- Uses a sim-to-real technique to close the gap between the virtual environment and the real world.
- Introduces virtual guidance to entice the agent to move toward a specific direction.
- Uses reinforcement learning to avoid obstacles while driving through crowds of people.
📋 Prerequisites
- Ubuntu 18.04
- gcc 5 or higher
- Python 2.7.17 or higher
- Python 3.5 or higher
- TensorFlow 1.12
Note: both versions of Python are required.
🔧 How It Works
- The full architecture is split into four parts: the Perception module, the Localization module, the Planner module, and the Control Policy module.
- The Perception module translates the camera image into comprehensible segmented chunks.
- The Localization module estimates the agent's position.
- The Planner module generates a path leading to the goal. This path is then communicated to the Control Policy module via a "virtual guide".
- The Control Policy module then applies deep reinforcement learning to control the agent. A minimal sketch of the full pipeline follows this list.
- For more details, please refer to the website.
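
The loop above can be summarized in a few lines of code. The sketch below is a hypothetical outline in Python with NumPy: every class name, method signature, and the point-at-the-goal heuristic are illustrative assumptions, not this project's actual API.

```python
# Hypothetical sketch of the four-module pipeline; names and interfaces
# are illustrative assumptions, not this project's actual API.
import numpy as np


class Perception:
    """Translates a raw camera image into a segmented map (stubbed)."""

    def segment(self, image):
        # A real implementation would run a semantic-segmentation network here.
        return np.zeros(image.shape[:2], dtype=np.uint8)


class Localization:
    """Estimates the agent's position from the segmented view (stubbed)."""

    def locate(self, segmentation):
        return 0.0, 0.0  # (x, y) position estimate


class Planner:
    """Plans a path to the goal and emits a 'virtual guide' direction."""

    def virtual_guide(self, position, goal):
        # Simplification: point the guide straight at the goal. The actual
        # planner would follow a collision-free path instead.
        dx, dy = goal[0] - position[0], goal[1] - position[1]
        return float(np.arctan2(dy, dx))


class ControlPolicy:
    """Maps the observation and guide direction to a motor command (stubbed)."""

    def act(self, segmentation, guide_direction):
        # A trained deep-RL policy would be queried here.
        return {"linear": 0.0, "angular": 0.0}


def navigation_step(image, goal, perception, localization, planner, policy):
    """One control tick: perceive -> localize -> plan -> act."""
    seg = perception.segment(image)
    position = localization.locate(seg)
    guide = planner.virtual_guide(position, goal)
    return policy.act(seg, guide)


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
    command = navigation_step(frame, (5.0, 2.0),
                              Perception(), Localization(), Planner(),
                              ControlPolicy())
    print(command)
```

Because the modules communicate only through these narrow interfaces, each one can be retrained or swapped out independently, which is what makes the framework modular.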
📚 Documentation
See here.
🔨 Installation
You can find the instructions here.
💪 Usage
Please refer to the Manual.