Another problem is that the mesh generated on the CoFusion dataset doesn't remove the traces of the moving points.
This differs from the result shown in Figure 5:
Hello @DeepDuke. Thanks for your interest.
Yes, it was a mistake we made during the code refactoring process. But all the numbers we reported in the paper are computed between `sampled_points` and `gt_valid_points`. Thank you very much for pointing it out.
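For context, accuracy and completeness in this style of evaluation are typically the mean nearest-neighbor distances between the two point sets, one per direction. A minimal sketch with `scipy` (the function name here is illustrative, not the repo's actual API):

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_and_completeness(sampled_points, gt_valid_points):
    """Mean nearest-neighbor distance in both directions.

    accuracy:     est -> gt  (penalizes spurious geometry)
    completeness: gt  -> est (penalizes missing geometry)
    """
    acc = cKDTree(gt_valid_points).query(sampled_points)[0].mean()
    comp = cKDTree(sampled_points).query(gt_valid_points)[0].mean()
    return acc, comp

# Tiny sanity check: identical clouds give zero error in both directions.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
acc, comp = accuracy_and_completeness(pts, pts)
```

This is why evaluating an uncropped mesh hurts accuracy specifically: every sampled point on geometry outside the ground-truth coverage has a large nearest-neighbor distance to `gt_valid_points`.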
We didn't crop the meshes from the baseline methods. Cropping would remove the dynamics from NKSR's results, and for the other methods (SHINE and VDB-Fusion) it doesn't influence the final score much.
That looks strange. I created a new conda environment with cuda-11.8 and torch-2.12 and reran the code; here is the mesh I got: Did you make any adjustments to the config file? Also, could you tell me what environment you are using?
Hi @StarryN, I found that I mistakenly deleted some lines of `static_mapping.py`
while drinking water. The bottle hit the keyboard... Now the CoFusion dataset result is good:
I was curious: the mesh generated by 4dNDF looks better than several baselines, at least in terms of completeness as far as the eye can tell. But when I evaluate the uncropped mesh, the accuracy metric is quite bad (please see the green lines in the screenshot). What's the reason?
I can obtain results similar to your paper. I was curious why the uncropped mesh deviates so much from the cropped mesh, while the baseline methods are not cropped but still get good evaluation results.
Another question: how do you generate the reference mesh file for the Newer College dataset? Why not use `ncd_quad_gt_pc.ply`? Thanks!
Hi @DeepDuke. It's good to see you reproduced the CoFusion result. Regarding your questions:
1. Why does the uncropped mesh deviate so much from the cropped mesh?
The input scans cover larger regions than the ground truth `ncd_quad_gt_pc.ply`. As you observed, the mesh generated in those extra regions leads to poor accuracy scores (distance from the estimated mesh to the ground-truth mesh). So we have to manually crop the meshes from all methods based on `ncd_quad_gt_pc.ply`. In this step, the meshes (ours and the baselines') are cropped with the same mask using CloudCompare. To make sure the result is reproducible, we store the cropped mesh as the reference so this cropping can be done automatically.
_2. Why not use `ncd_quad_gt_pc.ply`?_
Using the ground-truth cloud directly would also remove the dynamic objects, which NKSR cannot remove, and I think it's not fair to apply that only to our method. I tried using `ncd_quad_gt_pc.ply` to crop our mesh at a resolution of 0.5 meters and got even better numbers.
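The cropping described above can be approximated in code: drop every mesh face whose vertices are farther than some threshold from the reference point cloud. This is only a sketch of the idea using `scipy`, not the CloudCompare workflow the authors actually used, and the function name and 0.5 m threshold are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def crop_mesh_to_reference(vertices, faces, ref_points, max_dist=0.5):
    """Keep only mesh faces whose vertices all lie within max_dist
    of the reference point cloud (e.g. ncd_quad_gt_pc.ply)."""
    dist, _ = cKDTree(ref_points).query(vertices)
    keep_vertex = dist <= max_dist
    # A face survives only if all three of its vertices survive.
    kept_faces = faces[keep_vertex[faces].all(axis=1)]
    # Reindex the surviving vertices so the face array stays valid.
    used = np.unique(kept_faces)
    old_to_new = -np.ones(len(vertices), dtype=int)
    old_to_new[used] = np.arange(len(used))
    return vertices[used], old_to_new[kept_faces]
```

Applying the same mask (the same `ref_points` and `max_dist`) to every method's mesh is what keeps the comparison fair.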
@StarryN Hi, so the reference mesh file for Newer College was manually generated in CloudCompare by cropping a previous 4dNDF result with `ncd_quad_gt_pc.ply`? I'm not very familiar with CloudCompare; I guess it has a function to directly crop a mesh based on a point cloud?
Yes, we cropped it in CloudCompare using `ncd_quad_gt_pc.ply` as a reference.
@StarryN Thanks for your detailed answers. You are really nice and doing interesting work; I learned a lot from your code.
https://github.com/PRBonn/4dNDF/blob/3d4fa1f9dd970fc3d6f848cf1f6121d53a291e79/eval/eval_newercollege.py#L81-L84
It seems `results` should come from the computation between `sampled_points` and `gt_valid_points` in line 82, not line 84? BTW, for the baseline mesh files on the Newer College dataset, have they already been cropped by the reference mesh file?