microsoft / AirSim

Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
https://microsoft.github.io/AirSim/

Depth camera-based lidar: A proposal #3272

Open tharindurmt opened 3 years ago

tharindurmt commented 3 years ago

What feature are you suggesting?

A depth camera-based lidar sensor

Overview:

I have been using the standard AirSim lidar sensor for a while now, and it is rather slow when simulating lidars with high point acquisition rates (such as the Velodyne VLP-16, which supports up to 300,000 points per second), because the ray casting happens on the CPU. As such, I came up with an algorithm that uses AirSim's depth camera (DepthPlanner and/or DepthPerspective) to create a lidar-equivalent point cloud.

The idea is to start with a depth camera located at the same position and orientation where the lidar sensor would be. Then, at each tick, dynamically set the camera's image width and rotation. The vertical FoV is fixed (as it is for a real lidar sensor, e.g. 30 degrees for the Velodyne VLP-16), and the image height equals the number of channels in the lidar. The depth image captured at each tick is transformed into a partial point cloud, and these partial clouds are stitched together to form the final lidar scan.
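The geometry above can be sketched as follows. This is a minimal NumPy illustration, not AirSim code: it assumes a planar depth image (metres along the camera z-axis), pixels spread evenly across the FoV, and a hypothetical 45-degree horizontal sector per tick; the function names are mine, and real scans would use AirSim depth responses instead of the placeholder array.

```python
import numpy as np

def depth_to_points(depth, h_fov_deg, v_fov_deg):
    """Convert a planar depth image (H x W, metres along the camera
    z-axis) into an (H*W, 3) point cloud in the camera frame.
    Hypothetical helper, not part of the AirSim API."""
    h, w = depth.shape
    # Per-column azimuth and per-row elevation angles, assuming pixels
    # are spread evenly across the field of view.
    az = np.radians(np.linspace(-h_fov_deg / 2, h_fov_deg / 2, w))
    el = np.radians(np.linspace(v_fov_deg / 2, -v_fov_deg / 2, h))
    # For planar depth, tan(angle) gives x/z and y/z.
    x = depth * np.tan(az)[None, :]
    y = depth * np.tan(el)[:, None]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def rotate_yaw(points, yaw_deg):
    """Rotate a partial cloud about the vertical (y) axis so it sits at
    the camera's yaw for that tick, in a common lidar frame."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    r = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ r.T

# Stitch partial scans from several camera orientations into one cloud,
# using VLP-16-like geometry: 16 channels, 30-degree vertical FoV.
channels, cols_per_tick, v_fov = 16, 32, 30.0
scans = []
for yaw in range(0, 360, 45):  # one tick per 45-degree sector
    depth = np.full((channels, cols_per_tick), 10.0)  # placeholder depth
    scans.append(rotate_yaw(depth_to_points(depth, 45.0, v_fov), yaw))
cloud = np.vstack(scans)
```

In the real pipeline, `depth` would come from `simGetImages` at each tick, and the yaw would come from the camera pose set for that tick.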

Algorithm:

Nature of Request:

What do you think of this proposal and how can this be improved for accuracy and speed?

Why would this feature be useful?

This would be highly beneficial when simulating lidar sensors with high point acquisition rates.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had activity from the community in the last year. It will be closed if no further activity occurs within 20 days.