carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

GPU-accelerated LIDAR based on z-buffer #779

Closed marcgpuig closed 4 years ago

marcgpuig commented 6 years ago

A new sensor!

The plan is to create a new LIDAR based on the depth render using cameras.

Why?

johnzjq commented 5 years ago

The current Lidar is too slow for online simulation: 4 fps on a 2700X 8-core CPU. Rendering four images and extracting depth from them is not that difficult, but it is crucial for online simulation tasks.

bigsheep2012 commented 5 years ago

Hello Carla team @marcgpuig @nsubiron @johnzjq, thanks for releasing 0.9.3. Just want to confirm: according to the changelog, the new lidar (for generating perfect geometry points) is not implemented in 0.9.3, right?

If not implemented, would it be implemented in 0.9.4?

Thanks.

JoPaas commented 5 years ago

Hello @marcgpuig, is the implementation of this feature still planned? Thank you for your good work!

analog-cbarber commented 5 years ago

You can already use the existing depth camera to mimic a Lidar if you want. Just create 3 or more depth cameras covering the full 360° view. That will give you a very accurate 360-degree point cloud. If you want to mimic a Velodyne pattern, you would then sample points along the Velodyne scan patterns (probably better to do this on the depth maps rather than on the point cloud).
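(A minimal sketch of such a rig using the CARLA Python API, under assumptions not stated in the thread: three 120° cameras, an already-spawned `vehicle` actor, and a hypothetical `process_depth` handler.)

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

bp = world.get_blueprint_library().find('sensor.camera.depth')
bp.set_attribute('fov', '120')  # 3 cameras x 120 deg = full 360 deg coverage

cameras = []
for yaw in (0.0, 120.0, 240.0):
    transform = carla.Transform(carla.Location(z=2.4), carla.Rotation(yaw=yaw))
    # `vehicle` is assumed to be an actor spawned earlier in your script.
    cam = world.spawn_actor(bp, transform, attach_to=vehicle)
    # Bind yaw per camera; `process_depth` is a placeholder for your handler.
    cam.listen(lambda image, yaw=yaw: process_depth(image, yaw))
    cameras.append(cam)
```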

nsubiron commented 5 years ago

We put this task on hold because of the technical difficulties of simulating a rotating Lidar with a depth camera, but IMO we should still add the progress made here at some point, as it already gives you the 3D points from a depth image (a Velarray-style sensor, maybe).

analog-cbarber commented 5 years ago

Given that in 0.9 you can move your sensors dynamically, I imagine you could just have a single depth camera that you rotate from the client side, although you would probably have to use synchronous mode to get smooth results.
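(A rough sketch of that idea; the step size and mount height are arbitrary, and how `set_transform` behaves for attached sensors has varied across versions, so treat this as illustrative only.)

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True      # server waits for world.tick()
settings.fixed_delta_seconds = 0.05   # deterministic 20 Hz simulation step
world.apply_settings(settings)

# `depth_cam` is assumed to be a depth camera attached to the vehicle.
for frame in range(200):
    yaw = (frame * 18.0) % 360.0      # 18 deg/frame -> one revolution per second at 20 Hz
    depth_cam.set_transform(carla.Transform(carla.Location(z=2.4),
                                            carla.Rotation(yaw=yaw)))
    world.tick()                      # advance exactly one frame
```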

JoPaas commented 5 years ago

Thanks for all the input, I will take a look at the depth image solution. The rotation is a secondary feature for me, so it should work. @nsubiron: adding this would be great!

kimiya66 commented 5 years ago

How can I find the points detected by the Lidar (e.g. detected vehicles)? Does raw_data give me the locations of the detected points? Right now it is just a buffer pointing to memory, and I have no idea how to get information out of it: how can I get the locations of the detected points, or the distances to them? How do I read this raw_data buffer? Could someone please help me? I would be thankful.
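(For later readers, a minimal sketch of decoding the buffer with NumPy; the point layout depends on the CARLA version: early 0.9.x emits (x, y, z) float32 triplets, while newer releases append an intensity channel.)

```python
import numpy as np

def lidar_callback(measurement):
    # raw_data is a flat float32 buffer of point coordinates.
    data = np.frombuffer(measurement.raw_data, dtype=np.float32)
    points = data.reshape(-1, 3)                 # use (-1, 4) on versions with intensity
    distances = np.linalg.norm(points, axis=1)   # range to each return, in meters
    print(points.shape[0], 'points, nearest at', distances.min(), 'm')

# lidar_sensor.listen(lidar_callback)
```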

barbierParis commented 5 years ago

Hey @marcgpuig, did you manage to implement semantic segmentation for lidars? And is your implementation the lidar_gpu branch?

FilippoCeffa commented 5 years ago

The Z-buffer approach is a rasterization technique, and as such the quality of the result depends on the target buffer resolution. To approach the quality of the ray-casting approach, the resolution should be high enough that the error introduced by discretization is negligible. How much error is acceptable in this kind of feature?

There is a second potential issue: after rasterization is done, the output will be a buffer, not a point cloud. A "crawl" step would be necessary to extract the relevant pixels into a 3D list of points. This might be expensive, especially if the resolution has been set quite high to address the aforementioned problem.

I was wondering if it would be possible to use GPU ray tracing for this task, which would give the same precision as CPU ray-casting with much higher performance. Epic introduced ray tracing based on DXR in UE 4.22, so it might be worth investigating how easy it would be to hook into that API.

analog-cbarber commented 5 years ago

The z-buffer approach is equivalent to using the current depth sensor. We have been using the depth sensor quite a bit, and it is more than accurate enough for this purpose. Real LIDAR sensors have much lower resolution and accuracy than you will get from this approach. Projecting depths to point clouds is not prohibitively expensive.

The main reason for using ray tracing would be to develop more sophisticated LIDAR models that could account for reflections (e.g. off of water on the road).
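(For context, a sketch of that depth-to-point-cloud projection under standard assumptions: CARLA's documented depth encoding and a pinhole camera model; the function name is illustrative.)

```python
import numpy as np

def depth_image_to_points(bgra, fov_deg):
    """Back-project a CARLA depth image (H x W x 4 BGRA uint8 array) to
    camera-space 3D points, assuming CARLA's documented encoding:
    depth_m = 1000 * (R + G*256 + B*256^2) / (256^3 - 1)."""
    h, w = bgra.shape[:2]
    b = bgra[:, :, 0].astype(np.float64)
    g = bgra[:, :, 1].astype(np.float64)
    r = bgra[:, :, 2].astype(np.float64)
    depth = 1000.0 * (r + 256.0 * g + 65536.0 * b) / (256.0 ** 3 - 1.0)
    f = w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))  # pinhole focal length, pixels
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / f   # right
    y = (v - h / 2.0) * depth / f   # down
    return np.stack((depth, x, y), axis=-1).reshape(-1, 3)  # (forward, right, down)
```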

FilippoCeffa commented 5 years ago

I understand, thanks for the explanation.

cmpute commented 5 years ago

Any progress that can be shared? I really need a realistically shaped point cloud for algorithm testing. The rotation simulation can actually be ignored, since you can consider the sensor solid-state, as said in https://github.com/carla-simulator/carla/issues/779#issuecomment-476797748

Put this into high priority please! :nerd_face:

ZeKubiki commented 5 years ago

Really looking forward to this sensor implementation. I work for a robotics company, and anything approaching a high-fidelity lidar model would be invaluable, even if it doesn't exactly match the scan patterns of any existing units.

germanros1987 commented 4 years ago

This is being addressed as part of the External Sensor Interface (ESI) subproject, so I am closing this issue. Please open a new issue if needed.

ZeKubiki commented 4 years ago

What's the External Sensor Interface and is there an open issue for it?

cmpute commented 4 years ago

Same question as above