clinplayer / Point2Skeleton

Point2Skeleton: Learning Skeletal Representations from Point Clouds (CVPR 2021)
MIT License

Questions about evaluation #17

Closed congxin0920 closed 2 years ago

congxin0920 commented 2 years ago

Hello, I'm sorry to bother you again. I used the method you mentioned above to evaluate the pre-trained model you provided, but the results are worse than those in your paper. Could you please elaborate on the evaluation method?

clinplayer commented 2 years ago

I'm not sure what exactly you mean by "the results are worse". Are the numbers on a comparable scale to those in the paper? Did you normalize them by the sample counts, i.e., `1/M * Points2Spheres + 1/N * Spheres2Points`? Note that due to the random sampling, the numbers cannot be exactly the same.
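For concreteness, a minimal sketch of that normalization (not the repo's exact evaluation code; it treats the skeletal samples as plain points and ignores the sphere radii):

```python
import numpy as np

def normalized_distance(gt_points, skel_points):
    """`1/M * Points2Spheres + 1/N * Spheres2Points`, with
    M = len(gt_points) and N = len(skel_points); inputs are (M, 3)/(N, 3) arrays."""
    # Pairwise Euclidean distances between the M ground-truth points
    # and the N skeletal samples.
    d = np.linalg.norm(gt_points[:, None, :] - skel_points[None, :, :], axis=-1)
    points2spheres = d.min(axis=1).mean()  # each GT point -> nearest skeletal sample
    spheres2points = d.min(axis=0).mean()  # each skeletal sample -> nearest GT point
    return points2spheres + spheres2points
```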

congxin0920 commented 2 years ago

My evaluation results are: Average 0.0429, 0.1339, 0.1126, 0.2028. Are these results normal? I normalized the numbers by the sample counts, i.e., `1/M * Points2Spheres + 1/N * Spheres2Points`.

clinplayer commented 2 years ago

Generally they look reasonable: some numbers are better than the reported ones and some are worse. One more thing: see L62-63 in test.py. Please use util.rand_sample_points_on_skeleton_mesh to sample densely on the skeletal mesh, rather than directly using the predicted skeletal points.
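Roughly like this (a sketch only; the argument names and sample count below are placeholders, not the verified signature, so check test.py and the repo's util module for the real call):

```python
# Evaluate against dense samples on the skeletal mesh rather than the raw
# predicted skeletal points. skel_xyz, skel_faces and n_sample are assumed
# names for illustration only.
dense_skel = util.rand_sample_points_on_skeleton_mesh(skel_xyz, skel_faces, n_sample)
score = normalized_distance(gt_points, dense_skel)  # reuses the sketch above
```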

congxin0920 commented 2 years ago

To compute CD-Recon and HD-Recon, should the input point cloud be the dense point cloud or the point cloud after sampling?

congxin0920 commented 2 years ago

Hi, I'm sorry to bother you again. Are the average evaluation results (0.0372, 0.1424, 0.08828, 0.1898) in your paper obtained by dividing the sum of the per-category results by the number of categories?

clinplayer commented 2 years ago

> To compute CD-Recon and HD-Recon, should the input point cloud be the dense point cloud or the point cloud after sampling?

Do you mean the ground truth? The ground-truth point clouds for evaluation are the dense ones, which are given in the README.

> Are the average evaluation results (0.0372, 0.1424, 0.08828, 0.1898) in your paper obtained by dividing the sum of the per-category results by the number of categories?

No, they are obtained as the mean over all individual shapes.
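For example, a tiny illustration of the two averaging schemes (the numbers are made up, not from the paper):

```python
import numpy as np

# Hypothetical per-shape results grouped by category.
per_shape = {"airplane": [0.03, 0.04, 0.05], "chair": [0.06]}

# Per-category mean: average within each category, then across categories.
category_mean = np.mean([np.mean(v) for v in per_shape.values()])  # 0.05

# Mean over all individual shapes (what the paper reports).
shape_mean = np.mean([x for v in per_shape.values() for x in v])   # 0.045
```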

congxin0920 commented 2 years ago

> I'm not sure what exactly you mean by "the results are worse". Are the numbers on a comparable scale to those in the paper? Did you normalize them by the sample counts, i.e., `1/M * Points2Spheres + 1/N * Spheres2Points`? Note that due to the random sampling, the numbers cannot be exactly the same.

Sorry, I have one more small question: what is the value of M in the formula above (`1/M * Points2Spheres + 1/N * Spheres2Points`) when evaluating CD-Recon?

clinplayer commented 2 years ago

> Sorry, I have one more small question: what is the value of M in the formula above (`1/M * Points2Spheres + 1/N * Spheres2Points`) when evaluating CD-Recon?

They are just normalization parameters: M is the number of points in a ground-truth point cloud, and N is the number of skeletal spheres.