amzn / convolutional-handwriting-gan

ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation (CVPR20)
https://www.amazon.science/publications/scrabblegan-semi-supervised-varying-length-handwritten-text-generation
MIT License
264 stars 55 forks

Training on numbers dataset! #8

Open AhmedAl93 opened 3 years ago

AhmedAl93 commented 3 years ago

Hello,

Thank you for this amazing work, really useful for the DS community!

I have an issue when I try to train the model from scratch on images containing mainly digits (either dates in "dd/mm/yy" format or simple digit sequences from the ICDAR 2013 dataset).

The problem is that, at some point, the generator hinge loss becomes NaN (in ScrabbleGAN_baseModel.backward_G). The reason is that the tensor "ones_img" in ScrabbleGAN_baseModel.get_current_visuals() becomes NaN in the first place.

Please, I would like to know how to avoid this situation. Thanks in advance for your help!

P.S. Here are some logs (loss_G and dis_fake represent the generator hinge loss and the ones_img tensor, respectively): logs_github
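For context, the generator hinge loss in ScrabbleGAN-style GANs is simply the negative mean of the discriminator's scores on fake images, so a single NaN score from the discriminator poisons the whole loss. A minimal dependency-free sketch (the function name and the guard are illustrative, not from the repo):

```python
import math

def gen_hinge_loss(dis_fake_scores):
    """Generator hinge loss: loss_G = -mean(D(G(z))).
    A single NaN discriminator score makes the mean (and every
    gradient flowing back through it) NaN as well, which matches
    the failure described above. The explicit guard is only a
    debugging aid, not part of ScrabbleGAN itself."""
    if any(math.isnan(s) for s in dis_fake_scores):
        raise ValueError("NaN discriminator score - inspect inputs, labels, and learning rate")
    return -sum(dis_fake_scores) / len(dis_fake_scores)
```

In PyTorch, `torch.autograd.set_detect_anomaly(True)` can similarly help locate the first operation that produces a NaN.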

rlit commented 3 years ago

Did you try to run the regular IAM/RIMES experiment from scratch?

AhmedAl93 commented 3 years ago

Yes, I successfully trained the model on IAM from scratch.

kymillev commented 3 years ago

I have the same problem after I changed the alphabet (only lowercase letters and digits). Everything worked fine on my own dataset until I tried again with an adjusted alphabet. After the first epoch, the real and fake OCR losses become negative.

Edit: After changing the alphabet back to the original one (alphabetEnglish), the negative losses disappeared, so most likely the issue occurs when characters are encoded or decoded?
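If the alphabet is the suspect, one quick sanity check is to scan every label for characters the new alphabet cannot encode (a date like "12/03/99" silently breaks if "/" is missing). A small sketch, with an illustrative helper name:

```python
def find_unencodable(labels, alphabet):
    """Return (index, label, offending_chars) for every label that
    contains characters outside the alphabet. Such labels can
    corrupt the character encoding and, downstream, the OCR loss."""
    allowed = set(alphabet)
    return [
        (i, lbl, sorted(set(lbl) - allowed))
        for i, lbl in enumerate(labels)
        if not set(lbl) <= allowed
    ]
```

Running this over the whole training set before changing the alphabet should surface any mismatch immediately.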

@AhmedAl93 Have you found a solution to this problem?

AhmedAl93 commented 3 years ago

@kymillev No solution until now :/

rlit commented 3 years ago

Hi @kymillev and @AhmedAl93, thanks for your interest in this package, and for your patience. We try our best to respond to your questions in these challenging times.

The most common cause I found when I ran into these errors myself was data quality, which led to the NaN loss.

Try to filter the data and see if this solves the problem.
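A simple filtering pass along those lines might look like the following. This is only a sketch: the sample representation `(image_width, label)`, the helper name, and the thresholds are all illustrative, not from the repo.

```python
def filter_samples(samples, alphabet, max_len=20):
    """Drop low-quality samples before training.
    samples: list of (image_width, label) pairs.
    Removes entries that are empty, longer than max_len, contain
    out-of-alphabet characters, or have a degenerate width."""
    allowed = set(alphabet)
    kept = []
    for width, label in samples:
        if not label or len(label) > max_len:
            continue                       # empty or overly long label
        if set(label) - allowed:
            continue                       # unencodable characters
        if width <= 0:
            continue                       # broken/degenerate image
        kept.append((width, label))
    return kept
```

Comparing the sizes of the dataset before and after filtering is a quick way to see how much problematic data was present.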

darraghdog commented 3 years ago

One addition to @rlit's points: make sure your real and fake images have a similar size distribution. For example, if your fake-image lexicon is all 3 characters wide, then every fake image will be 48 pixels wide (16 pixels per character is the standard for the fake image generator). If the real images have a different number of characters, or have 3 characters but are not resized to 48 pixels wide, the discriminator will learn that 48 pixels wide is likely fake and anything else is real. This point had me stuck for a while, but after fixing this and the above points from @rlit I could train on different alphabet characters (non-ASCII).
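The mismatch described above is easy to check for: compute the width the generator would produce for each label (16 px per character) and flag real images that deviate. A minimal sketch with illustrative names:

```python
PX_PER_CHAR = 16  # generator's fixed width per character (see comment above)

def fake_width(label):
    """Width (in pixels) the generator would produce for this label."""
    return PX_PER_CHAR * len(label)

def width_mismatch(real_samples):
    """real_samples: list of (image_width, label) pairs.
    Returns the real images whose width differs from the generated
    width for the same label - an easy 'tell' for the discriminator."""
    return [(w, lbl) for w, lbl in real_samples if w != fake_width(lbl)]
```

If this returns a large fraction of the dataset, resizing the real crops (or rebalancing the lexicon lengths) is worth trying before blaming the losses.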

chiakiphan commented 3 years ago

If the real images are different number of characters, or 3 chars but not resized to 48 wide, the discriminator will learn 48 wide is likely fake; and anything else is real.

Hi @darraghdog, so this means the number of characters in a real image must equal that of a fake image, right?

darraghdog commented 3 years ago

I found that an approximately equal distribution of the number of characters, with images resized to the same width per character, helps a lot.
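Concretely, "same width per character" means resizing each real crop so its width scales with its label length. A sketch of the target-size computation (32 px height and 16 px per character follow the defaults discussed in this thread; confirm them in your own config):

```python
def target_size(label, height=32, px_per_char=16):
    """Target (width, height) for resizing a real crop so that its
    width per character matches the generator's output. The max()
    guards against empty labels producing a zero-width image."""
    return (px_per_char * max(len(label), 1), height)
```

The returned tuple can be passed straight to an image-resize call (e.g. Pillow's `Image.resize`).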

xiaomaxiao commented 3 years ago

@darraghdog I found that the real images are padded to the same size even though they have different numbers of characters. So should I make the number of characters the same within a batch?