PRBonn / SHINE_mapping

🌟 SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations (ICRA 2023)

eval issue #34

Closed: whwh747 closed this issue 5 months ago

whwh747 commented 5 months ago

I'm very grateful for your excellent work. However, I've encountered some issues while assessing the reconstruction quality. How do I compare the .ply file I obtained with the ground-truth .ply file? I haven't been able to find a ground-truth .ply file in the MaiCity dataset, only mai_city_block.ply. Assuming I now have the .ply files for sequences 00, 01, and 02 from running the code, how should I evaluate the completeness of the reconstruction and the other criteria? I'd greatly appreciate your help with this.

YuePanEdward commented 5 months ago

Thanks for your interest in our work. You can use the following command to download the MaiCity data; it will also download the ground-truth model gt_map_pc_mai.ply for sequence 01.

sh ./scripts/download_maicity.sh

Note that we only use sequence 01 for the evaluation.

whwh747 commented 5 months ago

Thank you for your response. I've downloaded the MaiCity dataset, and I'm using the final_mesh.ply obtained from running the code as the prediction and gt_map_pc_mai.ply as the ground truth. Can I use the evaluation code you provided to assess the reconstruction? Is there any important step I might have missed?

YuePanEdward commented 5 months ago

Yes, I think so.
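
For reference only (this is an illustrative sketch, not the repo's eval/evaluator.py), comparing a mesh prediction against a point-cloud ground truth generally starts by sampling a dense point cloud from the mesh. The file names below come from this thread, while the Open3D calls and the sample count are assumptions about one reasonable way to do it:

import open3d as o3d

# load the reconstructed mesh (prediction) and the ground-truth point cloud
pred_mesh = o3d.io.read_triangle_mesh("final_mesh.ply")
gt_pcd = o3d.io.read_point_cloud("gt_map_pc_mai.ply")

# sample points on the mesh so both sides can be compared via nearest-neighbor distances
pred_pcd = pred_mesh.sample_points_uniformly(number_of_points=1_000_000)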

whwh747 commented 5 months ago

To make sure I've understood correctly: do the five criteria for evaluating map quality in your paper correspond one-to-one to the five results output by evaluator.py, namely MAE_accuracy (m), MAE_completeness (m), Chamfer_L1 (m), Recall [Completeness] (%), and F-score (%)? I hope you don't mind my asking; I'm new to this field and my questions might seem naive. Sorry for any inconvenience.

YuePanEdward commented 5 months ago

Don't worry. Yes, you are correct.
Note that for the accuracy (MAE_accuracy) metric, as also mentioned in the paper, we compute it on a ground-truth point cloud masked by the intersection of the reconstructed meshes of all compared methods, which makes the comparison fairer. To generate such a masked ground-truth point cloud, configure the data path in ./eval/crop_intersection.py and run it. You may also refer to https://github.com/PRBonn/SHINE_mapping/issues/3#issuecomment-1367416983. I hope this helps.
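
For intuition only, and not the actual code in eval/evaluator.py, the five reported numbers can be computed from the prediction and the ground truth roughly as in the minimal sketch below; the 0.1 m inlier threshold, the sample count, and the Chamfer convention (mean of the two directed errors) are assumptions, so check the script for its real defaults:

import numpy as np
import open3d as o3d

# prediction: points sampled from the reconstructed mesh; ground truth: the provided point cloud
pred_pcd = o3d.io.read_triangle_mesh("final_mesh.ply").sample_points_uniformly(1_000_000)
gt_pcd = o3d.io.read_point_cloud("gt_map_pc_mai.ply")

# nearest-neighbor distances in both directions
dist_pred_to_gt = np.asarray(pred_pcd.compute_point_cloud_distance(gt_pcd))
dist_gt_to_pred = np.asarray(gt_pcd.compute_point_cloud_distance(pred_pcd))

threshold = 0.1  # m, assumed inlier threshold

mae_accuracy = dist_pred_to_gt.mean()                      # MAE_accuracy (m)
mae_completeness = dist_gt_to_pred.mean()                  # MAE_completeness (m)
chamfer_l1 = 0.5 * (mae_accuracy + mae_completeness)       # Chamfer_L1 (m)
precision = 100.0 * (dist_pred_to_gt < threshold).mean()
recall = 100.0 * (dist_gt_to_pred < threshold).mean()      # Recall [Completeness] (%)
f_score = 2.0 * precision * recall / (precision + recall)  # F-score (%)

For the masked accuracy variant described above, the ground-truth cloud would first be cropped with ./eval/crop_intersection.py and the cropped file used in place of gt_map_pc_mai.ply when computing MAE_accuracy.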

whwh747 commented 5 months ago

Thank you, I got it. I will close this issue!