paul007pl / MVP_Benchmark

MVP Benchmark for Multi-View Partial Point Cloud Completion and Registration
https://mvp-dataset.github.io/
Apache License 2.0

About the trained model for baseline methods #3

Closed amiltonwong closed 2 years ago

amiltonwong commented 2 years ago

Hi, @paul007pl ,

Thanks for releasing the code for the benchmark and baseline models. Do you also provide the trained models (checkpoints) for the baseline methods (e.g. [1] PCN; [2] ECG; [3] VRCNet)? They would be useful for producing consistent comparison results.

Thanks~

paul007pl commented 2 years ago

Thanks for your suggestion. I have the pretrained models, and I may provide those results later.

Currently, a "blank" benchmark can encourage every participant to try out and understand the different methods. Another concern is that everyone would like to report the best results for their own method, which may not be easy to achieve. Rather than publishing very strong results at the beginning, I think it is better to leave it open for participants to test and evaluate each method by themselves.

Because the "Train Dataset" and the "Test Dataset" are the same as the released MVP dataset (2048 points), at the current stage you can check our CVPR 2021 paper to compare different methods (see Table 4): https://arxiv.org/abs/2104.10154.

In short, the results on the "Test Dataset" are consistent (Table 4) and can be used in your reports and research papers; inconsistent results on the "Extra-Test Dataset" are acceptable at the current stage. Good luck and have fun~
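Before training, it can help to sanity-check the downloaded MVP HDF5 files against the 2048-point resolution mentioned above. The sketch below is a minimal, hypothetical example using h5py: the file name and the key names (`incomplete_pcds`, `complete_pcds`, `labels`) are assumptions, not confirmed from the repo, so inspect `f.keys()` on the real file first. Here a tiny synthetic file with the assumed layout is created so the snippet is self-contained.

```python
# Hypothetical sketch: inspecting an MVP-style HDF5 file with h5py.
# ASSUMPTIONS: the key names ("incomplete_pcds", "complete_pcds", "labels")
# and the 4-sample toy size are illustrative; only the 2048-point resolution
# comes from the discussion above. Check f.keys() on the real file.
import h5py
import numpy as np

path = "mvp_train_demo.h5"  # stand-in for the real MVP train file

# Build a tiny synthetic file with the assumed layout (4 samples, 2048 points).
with h5py.File(path, "w") as f:
    f.create_dataset("incomplete_pcds",
                     data=np.random.rand(4, 2048, 3).astype(np.float32))
    f.create_dataset("complete_pcds",
                     data=np.random.rand(4, 2048, 3).astype(np.float32))
    f.create_dataset("labels", data=np.arange(4, dtype=np.int64))

# Read it back and list the contents, as one would with the released dataset.
with h5py.File(path, "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
```

Swapping `path` for the real dataset file (and dropping the synthetic-write step) gives a quick way to confirm shapes and dtypes before starting a training run.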

amiltonwong commented 2 years ago

Thanks @paul007pl , I'll train it first.