zaiweizhang / H3DNet


Request for trained models. #11

Closed xiaodongww closed 3 years ago

xiaodongww commented 3 years ago

Hi, is it possible to provide the trained models (ScanNet and SUN RGB-D) reported in your paper?

Thanks very much.

zaiweizhang commented 3 years ago

Hi,

Here is a link to the ScanNet and SUN RGB-D models with the corresponding log files: https://drive.google.com/file/d/1WGAMrG3cyPkFRHPBdHgwhXfuKFLAQSCh/view?usp=sharing

I am a bit curious: are you not able to reproduce the results? I have tried our code on different machines/clusters, and it seems to work fine.

In addition, other people have been able to reproduce the results: https://github.com/zaiweizhang/H3DNet/issues/9 https://github.com/zaiweizhang/H3DNet/issues/5

We had not uploaded the models earlier because, for some reason, eval.py often produces slightly lower numbers than the same checkpoint evaluated during training. This issue also exists in VoteNet, and since we use the same codebase it carries over. That is why I included the log files as well.
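
For anyone picking up those checkpoints, here is a minimal loading sketch. It assumes the usual PyTorch `.tar` checkpoint convention from the VoteNet codebase (weights stored under a `model_state_dict` key); the file name and key names are assumptions, so inspect the extracted archive first.

```python
import torch

# Path to a checkpoint extracted from the Google Drive archive; the exact
# file name is an assumption -- adjust it to whatever the archive contains.
CKPT_PATH = "checkpoint.tar"

# Load on CPU first so this also works on machines without a GPU.
ckpt = torch.load(CKPT_PATH, map_location="cpu")

# VoteNet-style checkpoints typically store the weights under
# 'model_state_dict' (alongside 'optimizer_state_dict' and 'epoch');
# print the keys to confirm before restoring them into an H3DNet model
# built exactly as in train.py / eval.py.
print(sorted(ckpt.keys()))
# model.load_state_dict(ckpt["model_state_dict"])
```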

xiaodongww commented 3 years ago

Thanks for your quick reply. I need a well-trained model for some comparisons. However, it has been a long time since I last trained the model, so I am not sure whether my saved checkpoint is properly trained. I remember the results were correct at the time, but that checkpoint no longer works; I think it may have been accidentally overwritten by a trial experiment.

As for the evaluation disparity between training and testing: are the numbers reported in the paper the ones evaluated during training?

Thanks again for your help. Your work is very enlightening.

zaiweizhang commented 3 years ago

Yes, we report the performance evaluated during training. You can look at train.py to see how we do the evaluation.
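
To illustrate the general pattern (not the authors' exact code), here is a minimal sketch of interleaving evaluation with training and reporting the best validation number seen, under the assumption of a standard PyTorch loop; the model, data loaders, `evaluate_fn` helper, and epoch settings are all hypothetical placeholders.

```python
import torch

def train_with_periodic_eval(model, train_loader, val_loader, optimizer,
                             evaluate_fn, num_epochs=180, eval_every=10):
    """Interleave training and validation, keep the best validation metric,
    and checkpoint the weights. All arguments are illustrative placeholders;
    evaluate_fn(model, loader) should return a scalar such as mAP@0.25."""
    best_metric = float("-inf")
    for epoch in range(num_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = model(batch)      # assumes the model returns a scalar loss
            loss.backward()
            optimizer.step()

        # Evaluate every few epochs and track the best number seen so far,
        # which is the figure reported instead of a post-hoc eval.py run.
        if (epoch + 1) % eval_every == 0:
            model.eval()
            with torch.no_grad():
                metric = evaluate_fn(model, val_loader)
            if metric > best_metric:
                best_metric = metric
            torch.save({"epoch": epoch,
                        "model_state_dict": model.state_dict()},
                       "checkpoint.tar")
    return best_metric
```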

No problem. I am always ready to help.