I have been using the standard AirSim lidar sensor for a while now, and when it comes to simulating lidars with high point-acquisition rates (such as the Velodyne VLP-16, which supports up to 300,000 points per second), it is rather slow because the ray casting happens on the CPU. As such, I came up with an algorithm that uses AirSim's depth camera (DepthPlanner and/or DepthPerspective) to create a lidar-equivalent point cloud.
The idea is to start with a depth camera located at the same position and orientation as the lidar sensor. Then, at each tick, the image width and rotation of the camera are set dynamically. The vertical FoV is fixed (as for a real lidar sensor; for example, 30 degrees for the Velodyne VLP-16), and the image height equals the number of channels in the lidar sensor. The captured depth image is then transformed into a point cloud, and these partial point clouds are stitched together to form the final lidar scan.
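The depth-image-to-point-cloud step could be sketched roughly like this (a minimal NumPy sketch, not AirSim API code; it assumes a pinhole model and the DepthPlanner convention where each pixel stores depth along the camera's forward axis, with an x-forward / y-right / z-down camera frame — the function name and parameters are hypothetical):

```python
import numpy as np

def depth_to_point_cloud(depth, hfov_deg):
    """Project a planar depth image (H x W, metres) to an (H*W) x 3 point
    cloud in the camera frame (x forward, y right, z down — assumed)."""
    H, W = depth.shape
    # Focal length in pixels, derived from the horizontal FoV (pinhole model).
    f = (W / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    # Pixel coordinates centred on the principal point.
    u = np.arange(W) - (W - 1) / 2.0
    v = np.arange(H) - (H - 1) / 2.0
    uu, vv = np.meshgrid(u, v)
    x = depth            # forward (planar depth)
    y = depth * uu / f   # right
    z = depth * vv / f   # down
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Note that DepthPerspective stores Euclidean distance per pixel rather than planar depth, so the projection would need an extra normalisation step in that case.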
Algorithm:
Given: lidar rotation frequency f Hz, last laser angle θ_0, number of channels n, and vertical FoV V.
At each sensor call:
1. Calculate the time t elapsed since the last partial lidar scan.
2. Compute the horizontal angle swept during t: Δθ = 360 · f · t degrees.
3. Point the camera at yaw θ_0 + Δθ/2 and set its horizontal FoV (and image width) to cover Δθ; the image height stays equal to n and the vertical FoV equal to V.
4. Capture the depth image, convert it to a partial point cloud, and rotate it into the lidar frame.
5. Append the partial cloud to the current scan and update θ_0 ← θ_0 + Δθ.
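The per-call bookkeeping could look roughly like this (a pure-Python sketch of the sweep arithmetic only; `capture_depth` and `depth_to_points` are hypothetical stand-ins for the AirSim capture and projection steps, and the yaw sign convention is an assumption):

```python
import numpy as np

def rotation_z(theta_deg):
    """Rotation matrix about the vertical axis by theta_deg degrees."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def partial_scan(theta0_deg, t, freq_hz, capture_depth, depth_to_points):
    """One sensor call: sweep the camera over the angle rotated since the
    last call; return (points in the lidar frame, updated last angle)."""
    dtheta = (360.0 * freq_hz * t) % 360.0      # horizontal angle swept
    yaw = (theta0_deg + dtheta / 2.0) % 360.0   # aim camera at sweep centre
    depth = capture_depth(yaw=yaw, hfov_deg=dtheta)
    pts_cam = depth_to_points(depth, dtheta)
    # Rotate camera-frame points into the lidar frame.
    pts = pts_cam @ rotation_z(yaw).T
    return pts, (theta0_deg + dtheta) % 360.0
```

One design question this exposes: at high frame rates Δθ is small, so the dynamic horizontal FoV keeps the per-tick image narrow and the per-tick cost low, which is where the GPU-rendered depth image should win over CPU ray casting.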
What feature are you suggesting?
A depth camera-based lidar sensor
Nature of Request:
What do you think of this proposal, and how could it be improved for accuracy and speed?
Why would this feature be useful?
This would be highly beneficial in cases where lidar sensors with high point-acquisition rates are to be simulated.