PRBonn / 4dNDF

3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation (CVPR 2024)

Bug in evaluation #1

Closed: DeepDuke closed this issue 2 months ago

DeepDuke commented 2 months ago

https://github.com/PRBonn/4dNDF/blob/3d4fa1f9dd970fc3d6f848cf1f6121d53a291e79/eval/eval_newercollege.py#L81-L84

It seems the results should come from the computation between sampled_points and gt_valid_points in line 82, not line 84?
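Just to make sure we mean the same thing, here is a minimal sketch of the computation I have in mind (illustrative only, not the repo's actual code; the helper name and the use of scipy are mine, only sampled_points and gt_valid_points come from the linked file):

```python
# Minimal sketch (not the repo's exact code): accuracy should be computed
# between the points sampled from the estimated mesh and the *valid* GT points.
import numpy as np
from scipy.spatial import cKDTree

def mesh_accuracy(sampled_points: np.ndarray, gt_valid_points: np.ndarray) -> float:
    """Mean distance from points sampled on the estimated mesh to the valid GT points."""
    tree = cKDTree(gt_valid_points)        # index the valid ground-truth points
    dists, _ = tree.query(sampled_points)  # nearest GT neighbor for every sampled point
    return float(dists.mean())
```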

BTW, for the baseline mesh files on the Newer College dataset, have they already been cropped with the reference mesh file?

DeepDuke commented 2 months ago

Another problem: the mesh generated on the cofusion dataset doesn't remove the traces of the moving points. (screenshot)

This differs from the result in Figure 5: (screenshot)

StarryN commented 2 months ago

Hello @DeepDuke. Thanks for your interest.

  1. Yes, this was a mistake we made during the code refactoring process, but all the numbers reported in the paper were calculated between sampled_points and gt_valid_points. Thank you very much for pointing it out.

  2. We didn't crop the meshes from the baseline methods. Cropping would remove the dynamics in NKSR's results, and for the other methods (Shine and VDB-fusion) it doesn't influence the final score much.

  3. That looks strange. I created a new conda environment with cuda-11.8 and torch-2.12 and reran the code; here is the mesh I got: (screenshot). Did you make any adjustments to the config file? Also, could you please tell me what environment you are using?

DeepDuke commented 2 months ago

Hi @StarryN, I found that I mistakenly deleted some lines of static_mapping.py while I was drinking water. The bottle hit the keyboard... Now the cofusion dataset result looks good: (screenshot)

DeepDuke commented 2 months ago

I was curious: the mesh generated by 4dNDF looks better than several baselines, at least in terms of the completeness visible to the human eye. But when I evaluate the uncropped mesh, the accuracy metric is quite bad (please see the green lines in the screenshot). What's the reason? (screenshot)

I can obtain results similar to those in your paper, but I was curious why the uncropped mesh deviates so much from the cropped one. The baseline methods are not cropped yet still get good evaluation results.

DeepDuke commented 2 months ago

Another question: how do you generate the reference mesh file for the Newer College dataset? Why not use ncd_quad_gt_pc.ply directly? Thanks!

StarryN commented 2 months ago

Hi @DeepDuke. It's good to see you reproduce the cofusion result. Regarding your questions:

_1. Why does the uncropped mesh deviate so much from the cropped mesh?_ The input scans cover a larger region than the ground truth ncd_quad_gt_pc.ply. As you observed, the mesh generated in those extra regions leads to a poor Accuracy score (distance from the estimated mesh to the GT mesh). So we have to manually crop the meshes from all the methods based on ncd_quad_gt_pc.ply. In this step, the meshes (ours as well as the baselines') are cropped with the same mask using CloudCompare. To make the results reproducible, we store the cropped mesh as the reference so this cropping can be done automatically (a rough sketch of that step follows question 2 below).

_2. Why not use ncd_quad_gt_pc.ply?_ Using the ground truth directly to crop would also remove the dynamic objects, which NKSR cannot remove, and I think it's not fair to apply that only to our method. I tried using ncd_quad_gt_pc.ply to crop our mesh with a resolution of 0.5 meters and got even better numbers. (screenshot)
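For reference, the automatic cropping is conceptually something like the sketch below (illustrative only; the function name, open3d/scipy usage, and the 0.5 m default are my own choices, not the repo's exact script): keep only the parts of the evaluated mesh that lie within a small distance of the stored reference.

```python
# Illustrative sketch of cropping an estimated mesh against a reference mesh (not the repo's exact code).
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def crop_mesh_by_reference(mesh_path: str, reference_path: str, max_dist: float = 0.5):
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    ref = o3d.io.read_triangle_mesh(reference_path)
    # Densely sample the reference so the distance query is not limited to its vertices.
    ref_pts = np.asarray(ref.sample_points_uniformly(number_of_points=1_000_000).points)
    tree = cKDTree(ref_pts)
    dists, _ = tree.query(np.asarray(mesh.vertices))
    # Remove every vertex (and its triangles) farther than max_dist from the reference.
    mesh.remove_vertices_by_mask((dists > max_dist).tolist())
    return mesh
```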

DeepDuke commented 2 months ago

@StarryN Hi, so the reference mesh file for Newer College was generated manually in CloudCompare by cropping a previous 4dNDF result with ncd_quad_gt_pc.ply? I'm not very familiar with CloudCompare; I guess it has a function to crop a mesh directly based on a point cloud?

StarryN commented 2 months ago
  1. Yes, it is.
  2. We use the "segment" button to crop the mesh multiple times, using ncd_quad_gt_pc.ply as a reference. (screenshot)
DeepDuke commented 2 months ago

@StarryN Thanks for your detailed answers. You guys are really nice and are doing interesting work. I learned a lot from your code.