Thanks for your interest in our work.
I suppose the first evaluation result you showed was generated using the mai_city_block.ply file in the gt_models folder. That is the model used for simulation in CARLA, and there is a shift between the model's coordinate system and the point cloud's, which is why the evaluation results look completely wrong.
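As a quick sanity check, a minimal sketch with Open3D (assuming the file layout above; this is not the authors' evaluation script) can compare the two files' bounding-box centers to expose the offset between the coordinate systems:

```python
# Rough check of the offset between the CARLA model mesh and the reference
# point cloud; a large translation here explains the broken metrics.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("gt_models/mai_city_block.ply")  # CARLA simulation model
pc = o3d.io.read_point_cloud("gt_map_pc_mai.ply")                 # reference point cloud

offset = pc.get_axis_aligned_bounding_box().get_center() - \
         mesh.get_axis_aligned_bounding_box().get_center()
print("approximate shift between model and point cloud:", offset)
```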
We used the gt_map_pc_mai.ply point cloud file as the reference, which should correspond to the second evaluation result you showed. The difference between your numbers and what we reported in the paper comes from different parameter settings: unlike the defaults in the open-source config file, we used a 10 cm leaf_vox_size, a 50 m pc_radius_m, and the ekional loss turned on. Additionally, as mentioned in the paper, we use a fairer accuracy metric by masking the ground-truth point cloud with the intersection of the reconstructed meshes of all compared methods.
The reconstructed mesh and the cropped intersection reference point cloud can be downloaded from here. You can use them to reproduce results similar to those reported in our paper.
Thanks.
Hi, I ran your evaluation code using the meshes you provided and got good results.
This is using gt_map_pc_mai.ply as the reference.
And this is using the cropped intersection point cloud as the reference.
The results are better than the ones in the paper, but I still wonder why they are not exactly the same. Could this be affected by the Open3D version or something else? In my conda env, the Open3D version is 0.10.0.0.
I see now that the computation of the evaluation metrics has some randomness, so the result is fine. Sorry to bother you.
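For reference, the run-to-run variation typically comes from randomly sampling points on the reconstructed mesh before measuring distances. A minimal sketch of such an evaluation flow with Open3D (assumed flow and placeholder file names, not the exact script used in the paper):

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh_mai.ply")   # reconstructed mesh (placeholder name)
gt = o3d.io.read_point_cloud("gt_map_pc_mai.ply")  # reference point cloud

# Random surface sampling is the stochastic step that makes repeated runs differ slightly.
pred = mesh.sample_points_uniformly(number_of_points=1_000_000)

accuracy = np.mean(np.asarray(pred.compute_point_cloud_distance(gt)))      # prediction -> GT
completeness = np.mean(np.asarray(gt.compute_point_cloud_distance(pred)))  # GT -> prediction
print(f"accuracy: {accuracy:.4f} m, completeness: {completeness:.4f} m")
```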
Hello! Thanks for your nice work!
I ran your code and got a good-quality reconstructed mesh for the MaiCity dataset.
But when I used the evaluation code to evaluate the mesh, the results did not match the paper. I used the ground truth of the MaiCity dataset.
And this result uses the point cloud map as the reference.
And these are the paper results.
I'm confused by the evaluation results. What do you compare against?