imatge-upc / salgan

SalGAN: Visual Saliency Prediction with Generative Adversarial Networks
https://imatge-upc.github.io/salgan
MIT License

NaN loss when training from scratch #9

Closed saeedizadi closed 7 years ago

saeedizadi commented 7 years ago

Hi, I tried to train your model on my own dataset, which consists of (RGB image, binary mask) pairs where both are stored as image files (not .mat files). However, after several epochs I get a NaN training loss. What could be the problem? I modified your 01-preprocess_data code to use binary images as the ground truth; could that interfere with training?
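For reference, a minimal sketch of loading image-file ground truth instead of .mat files (this is not the repository's 01-preprocess_data script; the output size, the 0/255 threshold, and the function name are assumptions for illustration):

```python
# Minimal sketch, not the repository's 01-preprocess_data script.
# Assumes ground-truth masks are stored as 0/255 grayscale image files.
import cv2
import numpy as np

def load_binary_mask(path, out_size=(256, 192)):
    """Load a binary mask image and convert it to a float32 map in [0, 1]."""
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # out_size is (width, height); 256x192 is only an assumed network input size
    mask = cv2.resize(mask, out_size, interpolation=cv2.INTER_NEAREST)
    return (mask > 127).astype(np.float32)  # hypothetical threshold for 0/255 masks
```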

junting commented 7 years ago

Hello @saeedizadi ,

I do not think you are getting a NaN training loss because of the binary masks. Please try a smaller learning rate; the default learning rate may be making your training diverge.
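As a rough illustration of the suggestion, here is a minimal Lasagne/Theano sketch of keeping the learning rate in a shared variable so it can be lowered when the loss starts to diverge. The toy network, input shape, starting rate, and decay factor are all assumptions, not the repository's actual training code:

```python
# Minimal sketch with a toy network standing in for the saliency model.
import numpy as np
import theano
import theano.tensor as T
import lasagne

input_var = T.tensor4('inputs')
target_var = T.tensor4('targets')

# Assumed toy architecture; the real training script builds its own network.
network = lasagne.layers.InputLayer((None, 3, 96, 128), input_var)
network = lasagne.layers.Conv2DLayer(
    network, num_filters=1, filter_size=3, pad='same',
    nonlinearity=lasagne.nonlinearities.sigmoid)

prediction = lasagne.layers.get_output(network)
loss = lasagne.objectives.binary_crossentropy(prediction, target_var).mean()

# Keep the learning rate in a shared variable so it can be changed
# between epochs without recompiling the training function.
learning_rate = theano.shared(np.float32(1e-4))  # assumed starting value
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.nesterov_momentum(
    loss, params, learning_rate=learning_rate, momentum=0.9)
train_fn = theano.function([input_var, target_var], loss, updates=updates)

# If the loss blows up, shrink the rate further, e.g. by a factor of 10.
learning_rate.set_value(np.float32(learning_rate.get_value() * 0.1))
```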

saeedizadi commented 7 years ago

Yes, you are right. Using a smaller learning rate fixed it.