liuqk3 / PUT

Paper 'Transformer based Pluralistic Image Completion with Reduced Information Loss' in TPAMI 2024 and 'Reduce Information Loss in Transformers for Pluralistic Image Inpainting' in CVPR2022
MIT License

How to select the best model for testing? #3

Closed LonglongaaaGo closed 2 years ago

LonglongaaaGo commented 2 years ago

Hi @liuqk3, thank you so much for your awesome work. Can you give some insight into how to select the best model for P-VQVAE and UQ-Transformer during training? I found that the input image to P-VQVAE is also masked, so FID and LPIPS would not be correct if we compute them directly between the masked ground truth and the reconstructed image.
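For reference, one common workaround is to composite the reconstruction back into the known region before scoring, so the metric only penalizes the filled-in pixels. The sketch below is not from this repository; it assumes hypothetical tensors `gt`, `recon`, and a binary `mask` (1 for known pixels, 0 for holes), and uses the `lpips` package:

```python
import torch
import lpips  # pip install lpips

def composited_lpips(gt, recon, mask, loss_fn=None):
    """Score only the inpainted content against the full ground truth.

    gt, recon: (N, 3, H, W) tensors in [-1, 1]
    mask:      (N, 1, H, W) tensor, 1 = known pixel, 0 = hole
    """
    if loss_fn is None:
        loss_fn = lpips.LPIPS(net='alex')
    # Keep ground-truth pixels in the known region, use the model's
    # prediction only inside the holes, then compare to the unmasked GT.
    completed = gt * mask + recon * (1 - mask)
    return loss_fn(gt, completed).mean()
```

FID can be handled the same way: compute it between the set of unmasked ground-truth images and the set of composited outputs, rather than the raw reconstructions.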

liuqk3 commented 2 years ago

Thanks for your interest in our work. We do not select the best model. In our experiments, we simply set a maximum number of epochs and use the final trained model. In my experience, the more steps the model is trained for, the better the performance you will get.

LonglongaaaGo commented 2 years ago


Wow, that's really cool. Thank you so much!