BitsRobocon / AutonomousDrone

Autonomous Drone for Indoor Navigation and Parcel Delivery

Simulation for SLAM and faking sensor data #6

Open TheGodOfWar007 opened 4 years ago

TheGodOfWar007 commented 4 years ago

It is important to work in a simulated environment rather than going straight to on-field testing, to avoid unwanted damage to property, especially when working with aerial robots. The challenges are:

Div12345 commented 4 years ago

Data set from the Intel cameras, for simulating SLAM without the hardware setup: Data Set

Information is stored in a rosbag file (the exact contents of the rosbag file are described here).

This allows files recorded by the SDK to be replayed using any ROS tool or application.

Record and Playback Example

The playback device is an implementation of the device interface which reads from a file to simulate a real device. A playback device holds playback sensors, which simulate real sensors.
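As an illustration of the replay idea on the ROS side, a minimal launch file along these lines could play back a recorded bag on simulated time (the bag path is a placeholder, not a file from this repo):

```xml
<launch>
  <!-- Downstream nodes should follow the bag's clock, not wall time -->
  <param name="use_sim_time" value="true"/>
  <!-- Replay the recorded RealSense topics; --clock publishes /clock from the bag -->
  <node pkg="rosbag" type="play" name="rosbag_player"
        args="--clock $(env HOME)/data/realsense_recording.bag"/>
</launch>
```

Equivalently, `rosbag play --clock <file>.bag` from the command line gives the same behaviour for quick tests.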

TheGodOfWar007 commented 4 years ago

The most suitable, well-documented simulator with a rich model library is preferred. Given the wealth of online resources and its excellent ROS support, Gazebo has been chosen for the simulation part. The suggested learning sequence is:

  1. Go through all the ROS beginner tutorials.

  2. Then familiarise yourself with the Gazebo environment through the Gazebo beginner tutorials, starting with Understanding the GUI and the Model Editor.

  3. Then you can skip directly to the third bonus tutorial, Logging & Playback. The Building Editor and Model Editor are only needed if you want to create or edit structures of your own; since pre-existing models are available, it is fine to skip the editors for now.

Faking and Simulating sensor data:

  1. Go through the Velodyne LiDAR tutorial (tutorial 1 on the intermediate tutorials page).

  2. Skip tutorial 2 for now, but tutorials 3 to 6 (intermediate) are of high importance: they cover integrating ROS with Gazebo.

Gazebo model for the Intel RealSense cameras.

Which combination of ROS/Gazebo versions to use?

  3. Finally, go through all the tutorials given in the Connect to ROS section.

  4. Pay special attention to the ROS Depth Camera Integration tutorial.
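Tying into the depth-camera tutorial mentioned above, the core of the ROS integration is an SDF `<sensor>` block carrying the gazebo_ros depth-camera plugin. A hedged sketch (topic and frame names are illustrative, not this repo's configuration):

```xml
<sensor name="depth_camera" type="depth">
  <update_rate>20</update_rate>
  <camera>
    <horizontal_fov>1.047</horizontal_fov>
    <image>
      <width>640</width>
      <height>480</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <near>0.05</near>
      <far>8.0</far>
    </clip>
  </camera>
  <!-- gazebo_ros plugin that publishes the depth image and point cloud to ROS -->
  <plugin name="depth_camera_controller" filename="libgazebo_ros_openni_kinect.so">
    <cameraName>camera</cameraName>
    <imageTopicName>/camera/color/image_raw</imageTopicName>
    <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
    <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
    <frameName>camera_link</frameName>
  </plugin>
</sensor>
```

With this in the model, `rostopic list` should show the camera topics as soon as the world is running.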

An alternative for learning Gazebo and SLAM is the Udemy course ROS for Beginners II: Localization, Navigation and SLAM, which provides an excellent understanding of robot navigation using ROS and Gazebo.
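Independent of the Gazebo tutorials above, the "faking sensor data" idea can be sketched in plain Python: a stationary drone's IMU reads gravity plus Gaussian noise, which is already enough to exercise a SLAM pipeline's input handling offline. All names below are illustrative, not from this repo:

```python
import random

def fake_imu_stream(n_samples, rate_hz=100.0,
                    accel_noise_std=0.02, gyro_noise_std=0.001, seed=42):
    """Synthetic IMU readings for a stationary drone (illustrative only).

    Each sample is (t, accel_xyz, gyro_xyz): the accelerometer sees gravity
    (~9.81 m/s^2 on z) plus Gaussian noise, the gyro sees zero rate plus noise.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible fake data
    g, dt = 9.81, 1.0 / rate_hz
    samples = []
    for i in range(n_samples):
        accel = (rng.gauss(0.0, accel_noise_std),
                 rng.gauss(0.0, accel_noise_std),
                 rng.gauss(g, accel_noise_std))
        gyro = tuple(rng.gauss(0.0, gyro_noise_std) for _ in range(3))
        samples.append((i * dt, accel, gyro))
    return samples

stream = fake_imu_stream(200)
mean_az = sum(s[1][2] for s in stream) / len(stream)
print(f"{len(stream)} samples, mean accel z = {mean_az:.2f} m/s^2")
```

The same pattern extends to fake range or depth readings; in a ROS setup the generated tuples would be wrapped in `sensor_msgs/Imu` messages and published at `rate_hz`.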

TheGodOfWar007 commented 4 years ago

To model the dynamics of the drone in Gazebo, it is suggested to use the existing models prepared by the PX4 team. Refer to their official site at the following link: Gazebo Simulation

nepython commented 4 years ago

I did a bit of background research on a Gazebo model for the RealSense T265. Quoting from Is there a planned T265 Gazebo plugin?:

We currently do not have plans for a Gazebo plugin.

But I was able to find an alternative third-party plugin in the same issue which we can use. Note: I haven't tried the plugin yet but am noting it here for future reference :smile:.

TheGodOfWar007 commented 3 years ago

The final Gazebo simulation model for the Quadcopter has been completed. For the IMU and the depth camera we are using models from the default PX4-SITL_gazebo package. The exact model number for the IMU is unknown; for the depth camera a Kinect RGBD camera model is used, and for the LiDAR a Velodyne VLP-16 model. The set of sensors deployed on the actual Quadcopter will be different, though. The following table lists the components selected.

| Sensor | Model used in Simulation | Simulation Model Source | Model used in Actual Quadcopter (Vendor Links) |
| --- | --- | --- | --- |
| IMU | Model unknown | IMU Model | Vectornav VN-100 |
| Depth Camera | Xbox Kinect / RealSense | Kinect / Realsense | Stereolabs ZED 2 |
| LiDAR | Velodyne Puck (VLP-16) | VLP-16 | Velodyne Puck (VLP-16) |

The latest version of the PX4 firmware includes support for the RealSense camera through the official librealsense plugin by Intel; the mesh files are included by PX4. It is hereby suggested to replace the Kinect with the RealSense depth camera model for further testing. It doesn't make much of a difference in practice, but the librealsense plugin has detailed and articulate documentation. The official team inventory also has a set of RealSense tracking and depth cameras, so the algorithms developed can be tested directly on real hardware before transferring to the ZED 2.
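As a sketch of the suggested swap, the quadcopter's SDF would replace the Kinect include with the RealSense one. The model URIs, pose, and link names below are assumptions for illustration; verify the actual names shipped in PX4-SITL_gazebo/models before using them:

```xml
<!-- Before: Kinect-based depth camera attached to the airframe -->
<!-- <include><uri>model://depth_camera</uri></include> -->

<!-- After: RealSense model (name assumed; check PX4-SITL_gazebo/models) -->
<include>
  <uri>model://realsense_camera</uri>
  <pose>0.1 0 0 0 0 0</pose>
</include>
<joint name="realsense_joint" type="fixed">
  <!-- Link names assumed; match them to the parent airframe and camera model -->
  <parent>iris::base_link</parent>
  <child>realsense_camera::link</child>
</joint>
```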

TheGodOfWar007 commented 3 years ago

The world file used for initial simulations was the willow-garage office model listed in the official osrf/gazebo_models repository. The walls of that model are very small in scale compared to the quadcopter model, so the world file was replaced by @NidheeshJain with a custom-designed world model included in this repo (see building1.world).
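For reference, a world like building1.world can be loaded through the standard gazebo_ros launch wrapper. A hedged sketch, assuming the world file lives in a catkin package (the package name my_drone_sim is a placeholder, not this repo's actual package):

```xml
<launch>
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <!-- Load the custom office world instead of the default empty world -->
    <arg name="world_name" value="$(find my_drone_sim)/worlds/building1.world"/>
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
  </include>
</launch>
```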