PRBonn / SHINE_mapping

🌟 SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations (ICRA 2023)

Noise results testing on the nuScenes dataset #26

Closed dachengxiaocheng closed 1 year ago

dachengxiaocheng commented 1 year ago

Hey, thanks for releasing the amazing work!

The method works very well on the KITTI dataset with its 64-line LiDAR. I tested it on the nuScenes dataset with a 32-line LiDAR and got a very noisy result, including a lot of holes in the ground. The 3D points in the nuScenes dataset are sparser than those in the KITTI dataset.

In order to fill the holes in the 3D surface, I use mc_res_m=0.1 and only sample points with the "close-to-surface uniform sampling".
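(For context, "close-to-surface uniform sampling" draws training samples only within a narrow band around each measured LiDAR endpoint. A minimal NumPy sketch of the idea; the function name, shapes, and the range_m parameter are illustrative, not SHINE-Mapping's actual code:)

```python
import numpy as np

def sample_close_to_surface(ray_origins, endpoints, n_samples=3, range_m=0.3):
    """Illustrative close-to-surface uniform sampling: draw points
    uniformly within +/- range_m of each measured endpoint along the ray.
    (Names and defaults are hypothetical, not SHINE-Mapping's code.)"""
    directions = endpoints - ray_origins
    depths = np.linalg.norm(directions, axis=1, keepdims=True)  # (N, 1)
    directions = directions / depths                            # unit ray directions
    # Uniform depth offsets around the measured surface, shape (N, n_samples)
    offsets = np.random.uniform(-range_m, range_m, (len(endpoints), n_samples))
    sample_depths = depths + offsets
    # (N, n_samples, 3) sample points in the band around the surface
    return ray_origins[:, None, :] + sample_depths[..., None] * directions[:, None, :]
```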

Any idea how to make it work on the nuScenes dataset? Thank you so much.

YuePanEdward commented 1 year ago

Thanks for your interest in our work. For sparse datasets such as nuScenes, to achieve higher completeness, it's better to set tree_level_feat to a larger value (for example, 3) and mc_vis_level to a larger value (for example, 2). In this case, please also turn off mc_with_octree.
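(Concretely, the overrides would look roughly like this in kitti_batch.yaml; the key names are the ones discussed in this thread, but they are shown flat here and the exact nesting in the config file may differ:)

```yaml
# Suggested overrides for sparse data such as nuScenes
tree_level_feat: 3      # more feature levels -> higher completeness
mc_vis_level: 2         # mesh a larger region around observed voxels
mc_with_octree: False   # reconstruct on a regular grid instead of the octree
```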

Here's the result I achieved on nuScenes 0001 (containing 40 LiDAR frames) using kitti_batch.yaml with the modifications mentioned above, after 10k iterations.

[Screenshot from 2023-08-05: reconstruction result on nuScenes 0001]

It would be great if you could tell me the sequence number corresponding to the image you posted so that we can also test it.

dachengxiaocheng commented 1 year ago

Hey, thanks so much for the quick reply!

I tested it using the hyperparameters you suggested. It doesn't seem to make much difference.

I made a quick test on scene-0061 of the nuScenes-mini dataset. Could you run a quick test on this scene?

Thank you.

YuePanEdward commented 1 year ago

Hi, here's the result I got on scene-0061 with all 382 LiDAR frames and the parameters mentioned above (after 10k iterations):

[Screenshot from 2023-08-06 13-06-55: reconstruction with all 382 frames]

I've also tried using only 38 frames (1 out of every 10) for the mapping; the result is shown below (still okayish, but already with some holes):

[Screenshot from 2023-08-06 13-20-37: reconstruction with 38 frames]

But if you are using even fewer frames, it could be too sparse for the mapping.

dachengxiaocheng commented 1 year ago

Thanks for sharing the results on scene-0061. Your result looks denser and has fewer holes in the ground.

I use the latest code from your GitHub and run the script shine_batch.py with kitti_batch.yaml. I set tree_level_feat=3, mc_vis_level=2, and mc_with_octree=False. I only modified lidar_dataset.py so that it can load the nuScenes dataset; besides that, nothing is changed. However, my result looks much sparser. Is there anything I might be running incorrectly?

Did you change the hyperparameters of skimage.measure.marching_cubes()? Thank you so much!

YuePanEdward commented 1 year ago

I forgot to mention that I use mc_res_m: 0.2 for faster mesh reconstruction, but this should not make much difference in my opinion. There's no other modification from the latest code. SHINE-Mapping also outputs a downsampled merged point cloud in the map folder under the results path. Could you take a screenshot of the merged point cloud in some visualizer? Also, I wonder how many frames of the point cloud you are using.
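(For reference, mc_res_m is the voxel size at which the SDF grid is sampled before meshing. A hedged sketch of how it feeds into skimage's marching cubes; query_sdf, bbx_min, and bbx_max are hypothetical stand-ins for SHINE-Mapping's SDF decoder and map bounds:)

```python
import numpy as np
from skimage import measure

def reconstruct_mesh(query_sdf, bbx_min, bbx_max, mc_res_m=0.2):
    """Sample a learned SDF on a regular grid and run marching cubes.
    query_sdf maps (M, 3) points to (M,) signed distances (hypothetical);
    bbx_min/bbx_max bound the mapped region in meters."""
    xs = np.arange(bbx_min[0], bbx_max[0], mc_res_m)
    ys = np.arange(bbx_min[1], bbx_max[1], mc_res_m)
    zs = np.arange(bbx_min[2], bbx_max[2], mc_res_m)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    sdf = query_sdf(grid.reshape(-1, 3)).reshape(grid.shape[:3])
    # level=0 extracts the zero isosurface; spacing scales vertices to meters
    verts, faces, normals, _ = measure.marching_cubes(
        sdf, level=0.0, spacing=(mc_res_m, mc_res_m, mc_res_m))
    return verts + np.asarray(bbx_min), faces, normals
```

Note that halving mc_res_m from 0.2 to 0.1 makes the query grid eight times larger, which is why the coarser setting is used for faster reconstruction.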

dachengxiaocheng commented 1 year ago

Hey, I use the point clouds from the "sample_data" of the nuScenes-mini dataset. For scene-0061, there are 39 frames of LiDAR points and I use all of them. Here is the merged point cloud:

  1. Which surface reconstruction function do you use, mesher.recon_octree_mesh or mesher.recon_bbx_mesh? I just use the default one, mesher.recon_bbx_mesh.

  2. Do you pre-process the LiDAR points in the nuScenes dataset? I just use the nuscenes-devkit to load the original 39 frames of point clouds.

[Screenshot PC00: merged point cloud for scene-0061]
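(For reference, the 39 vs. 382 frame discrepancy likely comes from how nuScenes stores data: the sample table only exposes the annotated ~2 Hz keyframes, while the full 20 Hz LiDAR stream is reachable by walking the sample_data linked list. A minimal nuscenes-devkit sketch that collects all sweeps; the dataroot path is an assumption:)

```python
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud

# Walk the sample_data linked list to collect *all* LiDAR sweeps
# (keyframes + intermediate sweeps), not just the ~2 Hz keyframes.
nusc = NuScenes(version="v1.0-mini", dataroot="./data/nuscenes", verbose=False)
scene = next(s for s in nusc.scene if s["name"] == "scene-0061")

first_sample = nusc.get("sample", scene["first_sample_token"])
sd_token = first_sample["data"]["LIDAR_TOP"]

clouds = []
while sd_token:
    sd = nusc.get("sample_data", sd_token)
    pc = LidarPointCloud.from_file(nusc.get_sample_data_path(sd_token))
    clouds.append(pc.points.T[:, :3])  # (N, 3) xyz in the sensor frame
    sd_token = sd["next"]  # empty string at the end of the scene

print(f"Loaded {len(clouds)} LiDAR frames")  # ~382 for scene-0061
```

Each frame still needs to be transformed to the world frame with the corresponding calibrated_sensor and ego_pose records before mapping.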

dachengxiaocheng commented 1 year ago

Hello, thanks for your help.

I found a bug in my data loading. Now I get dense reconstruction results similar to what you provided.