JiahuiYu / generative_inpainting

DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
http://jiahuiyu.com/deepfill/

Results seem to have more contextual attention than understanding edges #341

Closed · jat923 closed this issue 4 years ago

jat923 commented 4 years ago

Hi, after training on my dataset, the filled results seem to be influenced more by the contextual attention than by edges and shapes. The model is not learning the features and shapes. (Attached images: landsat1429, output1.)

Is there a way to minimize the impact of the contextual attention layer?

JiahuiYu commented 4 years ago

Hi, the results you show here are normal at the early stage of training. They are not related to contextual attention. I would suggest debugging other aspects instead, for example training for more iterations and making sure your dataset is relatively large.

If you want to minimize the impact from contextual attention, you can simply remove that layer in the model file.
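The idea of removing the attention branch can be sketched as follows. This is a minimal NumPy stand-in, not the repository's actual model code: the function name, shapes, and both "branches" are hypothetical. It only illustrates the structural point that DeepFill's refinement stage concatenates a convolutional branch with a contextual-attention branch, so if you drop the attention branch you should keep the channel count consistent for the layers downstream.

```python
import numpy as np

def refinement_features(x, use_attention=True):
    """Hypothetical sketch of a two-branch refinement stage.

    DeepFill concatenates a dilated-conv branch with a contextual
    attention branch; here, disabling attention simply reuses the conv
    branch so the "decoder" still receives the same number of channels.
    """
    conv_branch = np.tanh(x)  # stand-in for the conv pathway
    if use_attention:
        # Stand-in for contextual attention: borrow global context.
        attn_branch = np.broadcast_to(x.mean(), x.shape)
    else:
        attn_branch = conv_branch  # attention branch removed
    return np.concatenate([conv_branch, attn_branch], axis=-1)
```

In the real model file you would instead delete (or bypass) the contextual attention call and adjust the concatenation accordingly; the exact edit depends on which DeepFill version you are running.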

jat923 commented 4 years ago

Thanks for the quick reply! I trained for 20,000 epochs with batch size 2 and 10 iterations per epoch (about 200,000 iterations total) on around 50,000 images. Do you think that is enough? If I still get such results, does it mean something is wrong (maybe with the data)?

JiahuiYu commented 4 years ago

Your batch size is too small; that's a big issue. Normally I use at least 16, otherwise training will be very unstable and the results will not be good.

Please have a read of papers in the GAN literature, e.g., BigGAN.

jat923 commented 4 years ago

I had to keep the batch size small to avoid OOM errors. Thanks for your valuable directions; I will try these out.
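One common workaround for this trade-off (not from this thread, and not part of the repository) is gradient accumulation: average gradients over several small micro-batches before applying a single update, which approximates a larger effective batch without the extra GPU memory. The sketch below is framework-agnostic NumPy; `grad_fn` and all names are hypothetical.

```python
import numpy as np

def accumulated_update(params, grad_fn, micro_batches, lr=0.1):
    """Hypothetical gradient-accumulation step.

    Averages gradients over several small micro-batches before one
    parameter update (e.g. 8 micro-batches of 2 ~ effective batch 16).
    grad_fn(params, batch) must return the gradient for one micro-batch.
    """
    grads = [grad_fn(params, batch) for batch in micro_batches]
    return params - lr * np.mean(grads, axis=0)
```

In a TF1-style training loop this would correspond to accumulating the gradients returned by the optimizer's `compute_gradients` across several steps and calling `apply_gradients` once on their average; whether that interacts well with the GAN's alternating updates here is something you would need to verify.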