yccyenchicheng / AutoSDF

About Evaluation Scripts #14

Open jjongs97 opened 2 years ago

jjongs97 commented 2 years ago

Thanks for releasing the code for this awesome work! Could you please provide the evaluation scripts? I am confused about the evaluation of multimodal completion. Thank you.

yccyenchicheng commented 2 years ago

Thanks!

For the multimodal completion, we follow the script provided by MSC: https://github.com/ChrisWu1997/Multimodal-Shape-Completion/tree/master/evaluation. We will also provide the script soon.
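In case it helps while the script is pending, here is a minimal sketch of a diversity score in the spirit of the MSC evaluation linked above (a Total Mutual Difference style metric over k sampled completions). The function names, array shapes, and the normalization over pairs are illustrative assumptions, not the MSC reference code; please treat the linked repository as the source of truth for the exact metric definitions.

```python
# Minimal sketch of a TMD-style diversity score for multimodal completion,
# in the spirit of the MSC evaluation; not the MSC reference implementation.
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between two point clouds of shape (N, 3)."""
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)  # (N1, N2) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def total_mutual_difference(completions):
    """Mean pairwise Chamfer distance among k completions of one partial input.
    Whether pairs are summed or averaged should follow the MSC reference code."""
    k = len(completions)
    total = sum(
        chamfer_distance(completions[i], completions[j])
        for i in range(k) for j in range(i + 1, k)
    )
    return 2.0 * total / (k * (k - 1))

# Example: k = 10 completions with 1024 points each for a single partial shape;
# the final score is averaged over all partial inputs in the test set.
completions = [np.random.rand(1024, 3) for _ in range(10)]
print(total_mutual_difference(completions))
```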

Please also let me know which part of the evaluation confuses you, so that I can clarify it for you. Sorry for the inconvenience.

jjongs97 commented 2 years ago

I don't fully understand this description in the paper: "For a fair comparison, we also give the baseline methods additional points within the truncation threshold." So I would like to ask a question about that setting.

yccyenchicheng commented 1 year ago

Hi, sorry for the late reply!

Because we fill the missing regions with 0.2, the model essentially gets some information about the boundaries that the point-cloud-based models do not have. So, for a fair comparison, we use fewer SDF grid cells when computing the metrics. For instance, suppose the resolution is 64 and the bottom-half region of the shape is missing. In our case we then fill 33 (or more) of the grid slices with 0.2, instead of 32, to indicate the missing region. This effectively gives the other methods more points during the evaluation.
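For concreteness, here is a hedged sketch of that fill-and-mask convention. It assumes a 64^3 SDF grid with the bottom half missing along one axis; the variable names, the chosen axis, and the exact boundary handling are illustrative assumptions, not the actual evaluation code.

```python
# Hedged illustration of the fill/mask convention described above: a 64^3 SDF
# grid whose bottom half is missing. Variable names, the missing axis, and the
# exact number of filled slices are assumptions for illustration only.
import numpy as np

RES = 64
TRUNC = 0.2          # truncation threshold; missing regions are filled with this value
MISSING_SLICES = 33  # 33 (one more than the strict half, 32) slices treated as missing

# Stand-in for a complete truncated SDF grid.
full_sdf = np.random.uniform(-TRUNC, TRUNC, size=(RES, RES, RES)).astype(np.float32)

# Build the partial input: fill the "missing" bottom-half slices with the
# truncation value, which is what leaks boundary information to the model.
partial_sdf = full_sdf.copy()
partial_sdf[:, :MISSING_SLICES, :] = TRUNC

# For the metrics, drop the filled slices so the comparison uses fewer SDF
# cells, compensating the point-cloud baselines that never saw that boundary.
eval_mask = np.ones((RES, RES, RES), dtype=bool)
eval_mask[:, :MISSING_SLICES, :] = False
observed_cells = full_sdf[eval_mask]  # cells actually used when computing metrics
print(observed_cells.shape)           # (64 * 31 * 64,) with these settings
```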

youngstu commented 1 year ago

Could you also provide the language-guided evaluation script, so that follow-up work can align metrics with it and reference it?

Many thanks.