tangtaogo / lidar-nerf

LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields
https://tangtaogo.github.io/lidar-nerf-website/
MIT License
126 stars 9 forks

Attempting to reproduce a paper #2

Closed AI-student-wl closed 1 year ago

AI-student-wl commented 1 year ago

Hello, this is an excellent piece of work, and I appreciate your efforts in publishing it. I am trying to reproduce the results of nerf-mvl. The code runs smoothly according to the instructions, but I don't understand the visualization results, and I hope I can get your help. Thank you once again for your contributions to this field.

[image] This is the visualization result of the TensorBoard training.

[image] These are the .ply files saved in the log/trial_nerf_mvl/meshes folder.

[image] These are the results in the log/trial_nerf_mvl/validation folder; they are almost entirely red or black, and nothing can be made out.

The following image shows the visualization of the npy file using the Open3D library, which appears to be only a partial point cloud of the object:

[image]
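For reference, this is roughly how I load and view one of the npy files with Open3D (the filename here is just an example):

```python
import numpy as np
import open3d as o3d

# Load one of the saved arrays (example filename; mine differs).
points = np.load("log/trial_nerf_mvl/validation/points.npy")
points = points.reshape(-1, 3)  # assuming the array holds xyz points

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd])
```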

tangtaogo commented 1 year ago

Hello, thank you very much for your recognition of and interest in our work. I apologize for the delayed response; my availability has been limited recently.

Firstly, let me explain the training of nerf-mvl. Our LiDAR data is modeled according to the LiDAR's parameters, so most of the images in the validation folder have a resolution of 256x1800. However, nerf-mvl is an object-centric dataset, so only a small portion of each image represents the object from a certain angle, and it is difficult to discern the object from the images alone.

We do, however, provide 3D bounding boxes in the world coordinate system for each object (data/nerf_mvl/dataset_bbox_7k.npy, generated here: https://github.com/tangtaogo/lidar-nerf/blob/8083a1d74eef6e7dc91c65ed7aedc2b45a39f76c/preprocess/generate_train_rangeview.py#L47). With these 3D bounding boxes, you can crop the corresponding 3D object points (as shown in the code: https://github.com/tangtaogo/lidar-nerf/blob/8083a1d74eef6e7dc91c65ed7aedc2b45a39f76c/lidarnerf/nerf/utils.py#L1124) or project them onto a 2D view to obtain a 2D mask (for example, in the code: https://github.com/tangtaogo/lidar-nerf/blob/8083a1d74eef6e7dc91c65ed7aedc2b45a39f76c/lidarnerf/nerf/utils.py#L902, where gt_drop is also generated based on the 3D bounding box: https://github.com/tangtaogo/lidar-nerf/blob/8083a1d74eef6e7dc91c65ed7aedc2b45a39f76c/preprocess/generate_train_rangeview.py#L73).
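As a rough sketch (not the exact repository code; the class key and box layout here are placeholders, see the linked lines for the real logic), cropping the points that fall inside one object's box could look like this:

```python
import numpy as np

# Assumption: dataset_bbox_7k.npy stores a dict mapping each object class
# to its box corners in world coordinates; check the linked code for the
# actual layout.
bboxes = np.load("data/nerf_mvl/dataset_bbox_7k.npy", allow_pickle=True).item()
box = np.asarray(bboxes["some_class"]).reshape(-1, 3)  # placeholder class name

points = np.load("validation_points.npy").reshape(-1, 3)  # example filename

# Take the axis-aligned extent of the box corners and keep points inside it.
lo, hi = box.min(axis=0), box.max(axis=0)
mask = np.all((points >= lo) & (points <= hi), axis=1)
object_points = points[mask]
print(f"kept {mask.sum()} of {len(points)} points")
```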

Therefore, for image visualization, you can refer to the code above and crop the corresponding regions according to your needs. Regarding the color of the images, you can adjust the parameters of cv2.applyColorMap to modify it.
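For instance, a minimal sketch of recoloring a range image (the filename is an example, and the normalization and colormap choice are up to you):

```python
import cv2
import numpy as np

depth = np.load("validation_depth.npy")  # e.g. a (256, 1800) range image
# cv2.applyColorMap expects uint8, so normalize to 0-255 first.
norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
colored = cv2.applyColorMap(norm, cv2.COLORMAP_JET)  # try other colormaps too
cv2.imwrite("depth_vis.png", colored)
```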

Secondly, there seems to be an issue with the point cloud visualization based on the information you provided. I haven't come across this problem during my local testing. Could you please let me know the command you used to run it and which category of nerf-mvl you were running? We can work together to resolve it.