yu4u / age-gender-estimation

Keras implementation of a CNN for age and gender estimation
MIT License

Epoch 00001: val_loss did not improve from inf #97

Open yimjunhyuck2 opened 5 years ago

yimjunhyuck2 commented 5 years ago

I tried training with the following command but it failed: `sudo python3 ./train.py --input data/imdb_db.mat --nb_epochs 10 --depth 10`. Every epoch printed the message "Epoch 000XX: val_loss did not improve from inf". Also, after train.py finished, I tried plotting with `sudo python3 ./plot_history.py`, but it failed with the message "File models/history_16_8.h5 does not exist". I wonder if I'm missing something.

yimjunhyuck2 commented 5 years ago

During training, the loss becomes NaN. I can't figure out how to solve this problem.

yu4u commented 5 years ago

How did you create data/imdb_db.mat?

Try a different learning rate (lr).

yimjunhyuck2 commented 5 years ago

I created `data/imdb_db.mat` and tried lr = 0.6, 0.8, and 1.5, but the NaN loss problem still occurs. :(

yimjunhyuck2 commented 5 years ago

Is this a problem with TensorFlow or Keras?

youngmihuang commented 4 years ago

I'd like to know if there is a solution for this? I have the same problem.

yimjunhyuck2 commented 4 years ago

In my case, the default learning rate was too high. There was some improvement when I set it to 0.005~0.01. Sorry for my poor English, and I hope you get the result you wanted.


youngmihuang commented 4 years ago

Thanks for your suggestion. It seems that after decreasing the learning rate to below 0.01, the loss improves (no more NaN).