pmj110119 / RenderOcc

[ICRA 2024] RenderOcc: Vision-Centric 3D Occupancy Prediction with 2D Rendering Supervision. (Early version: UniOcc)

How about generating rays directly from lidarseg #7

Closed · secret104278 closed 1 year ago

secret104278 commented 1 year ago

Hi team, currently the ground-truth generation pipeline projects lidar data onto each camera to produce 2D depth and segmentation maps as the NeRF rendering targets. However, I'm curious: have you tried using the lidar as the ray source directly (for example, with ray_o at the lidar sensor position), and experimented to see which method gives better results?
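
For context, here is a minimal sketch of the projection pipeline described above. It assumes NumPy, and all names (`project_lidar_to_image`, `lidar2img`) are hypothetical rather than the repo's actual API:

```python
import numpy as np

# Minimal sketch (hypothetical names, not the RenderOcc API): project lidar
# points onto one camera image to build per-pixel depth / segmentation
# targets. The rounding to pixel coordinates is where 2D quantization
# error enters.
def project_lidar_to_image(points_lidar, labels, lidar2img, H, W):
    """points_lidar: (N, 3) xyz in the lidar frame; labels: (N,) lidarseg ids;
    lidar2img: (4, 4) combined extrinsic + intrinsic projection matrix."""
    pts_h = np.concatenate([points_lidar, np.ones((len(points_lidar), 1))], axis=1)
    cam = pts_h @ lidar2img.T                 # homogeneous image coordinates
    depth = cam[:, 2]
    front = depth > 1e-3                      # keep points in front of the camera
    uv = cam[front, :2] / depth[front, None]  # perspective divide
    u = np.round(uv[:, 0]).astype(int)        # quantize to pixel centers
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u[inside], v[inside]
    d = depth[front][inside]
    s = labels[front][inside]
    order = np.argsort(-d)                    # far-to-near: nearer points overwrite
    depth_map = np.zeros((H, W), dtype=np.float32)
    seg_map = np.full((H, W), -1, dtype=np.int32)  # -1 marks pixels with no return
    depth_map[v[order], u[order]] = d[order]
    seg_map[v[order], u[order]] = s[order]
    return depth_map, seg_map
```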

pmj110119 commented 1 year ago

Yes, it is possible to directly use LiDAR points to generate rays. This approach avoids the numerical errors associated with 2D projection and can yield slight performance improvements (we compared this in early experiments).
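
For illustration, a minimal sketch of that alternative: one ray per lidar return, with the origin fixed at the sensor. The names here (`lidar_points_to_rays`, `lidar2ego`) are hypothetical, not taken from this codebase:

```python
import numpy as np

# Minimal sketch (hypothetical names, not from this repo): build one NeRF ray
# per lidar return, with ray_o at the sensor origin. Depth and semantics come
# straight from the point cloud, so no 2D projection error is introduced.
def lidar_points_to_rays(points_lidar, labels, lidar2ego):
    """points_lidar: (N, 3) xyz in the lidar frame; labels: (N,) lidarseg ids;
    lidar2ego: (4, 4) lidar-to-ego transform. Returns rays in the ego frame."""
    pts_h = np.concatenate([points_lidar, np.ones((len(points_lidar), 1))], axis=1)
    pts_ego = (pts_h @ lidar2ego.T)[:, :3]
    ray_o = lidar2ego[:3, 3]                   # lidar sensor origin in ego frame
    vec = pts_ego - ray_o
    depth = np.linalg.norm(vec, axis=1)        # exact per-ray range
    rays_d = vec / depth[:, None]              # unit direction per return
    rays_o = np.broadcast_to(ray_o, pts_ego.shape).copy()
    return rays_o, rays_d, depth, labels
```

Because depth is measured directly along each ray, no pixel-quantization step is involved.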

Since the main focus of the paper is training models with 2D image labels, which nuScenes does not provide, we had to generate them by projecting lidarseg. If your project does not have this constraint, you can certainly use LiDAR points to generate rays directly.

secret104278 commented 1 year ago

Hi @pmj110119, thanks for your insight. Also, would it be possible for you to open-source the lidar-ray version from your early experiments?

pmj110119 commented 1 year ago

I'm sorry, but that experiment is from too early a stage and differs significantly from the currently released version. Implementing 'lidar rays' directly on top of the current version's code would be a more cost-effective approach.

If I have time in the future, I may add this option, but it's not on the immediate roadmap. (If you're willing, a pull request would be very welcome.)