miaodd98 / ITrans

ITrans: Generative Image Inpainting with Transformers, ChinaMM 2023, Multimedia Systems
https://link.springer.com/article/10.1007/s00530-023-01211-w
MIT License

Questions about training strategy #1

Open VV-Hope opened 1 month ago

VV-Hope commented 1 month ago

How many iterations did you train for on the CelebA-HQ dataset? Why does my model perform well on the training set but poorly on the test set? Could you please share your training strategy?

miaodd98 commented 1 month ago

Actually the mask dataset has a huge influence on the outcome. I trained for about 400 epochs on CelebA-HQ from scratch with a fixed learning rate, and switched the mask dataset partway through: small masks at the beginning, then large masks later for better generation performance. Some of the masks are generated the way LaMa does it (Resolution-robust Large Mask Inpainting with Fourier Convolutions); you can check that paper.
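
For anyone reproducing this, here is a minimal sketch of LaMa-style irregular mask generation (thick random polygonal chains), assuming a numpy/OpenCV pipeline. The function name and all parameter values are illustrative, not the repo's actual code; scaling `max_len_ratio` and `max_width_ratio` up partway through training gives the small-to-large mask curriculum described above:

```python
import numpy as np
import cv2

def lama_style_irregular_mask(h, w, max_chains=4, max_vertices=8,
                              max_len_ratio=0.4, max_width_ratio=0.1,
                              rng=None):
    """Random thick polygonal chains, roughly following the irregular-mask
    recipe from LaMa (Suvorov et al., WACV 2022). 255 = hole, 0 = valid."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((h, w), np.uint8)
    for _ in range(int(rng.integers(1, max_chains + 1))):
        # Start a chain at a random point with a random heading.
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        angle = rng.uniform(0, 2 * np.pi)
        width = max(1, int(rng.uniform(0.01, max_width_ratio) * min(h, w)))
        for _ in range(int(rng.integers(1, max_vertices + 1))):
            angle += rng.uniform(-0.5, 0.5)  # let the chain wander
            length = rng.uniform(0.05, max_len_ratio) * min(h, w)
            nx = int(np.clip(x + length * np.cos(angle), 0, w - 1))
            ny = int(np.clip(y + length * np.sin(angle), 0, h - 1))
            cv2.line(mask, (x, y), (nx, ny), 255, width)
            x, y = nx, ny
    return mask

# Small masks early in training, large masks later (illustrative values):
small = lama_style_irregular_mask(256, 256, max_len_ratio=0.15, max_width_ratio=0.04)
large = lama_style_irregular_mask(256, 256, max_len_ratio=0.40, max_width_ratio=0.10)
```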

VV-Hope commented 1 month ago

Thanks for your answer. Do you use --model 3 directly during training?

miaodd98 commented 3 weeks ago

Yes, and sometimes I use --model 1 at the beginning of training to train the edge detection network with different masks.
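
To make that schedule concrete, a rough sketch of the staging is below. Only the `--model 1` (edge network) and `--model 3` (joint model) flag values are confirmed in this thread; the stage lengths, mask settings, and the `run_training` helper are hypothetical stand-ins for the repo's real training entry point:

```python
def run_training(model, epochs, mask_cfg):
    # Placeholder: in the real repo this would launch the trainer with
    # the corresponding --model flag; here it only reports the plan.
    print(f"--model {model}: {epochs} epochs, masks={mask_cfg}")

STAGES = [
    # (model flag, epochs, mask settings) -- all numbers illustrative
    (1,  50, dict(max_len_ratio=0.15, max_width_ratio=0.04)),  # edge net, small masks
    (3, 200, dict(max_len_ratio=0.15, max_width_ratio=0.04)),  # joint model, small masks
    (3, 150, dict(max_len_ratio=0.40, max_width_ratio=0.10)),  # joint model, large masks
]

for model_flag, n_epochs, mask_cfg in STAGES:
    run_training(model_flag, n_epochs, mask_cfg)
```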

VV-Hope commented 3 weeks ago

Thanks for your answer. You are so kind~