uoguelph-mlrg / theano_alexnet

Theano-based Alexnet
BSD 3-Clause "New" or "Revised" License

Training Cost NAN #27

Open jiangqy opened 8 years ago

jiangqy commented 8 years ago

Hi, I would like to train AlexNet on ImageNet, but after about 20 iterations the training cost becomes NaN.

Should I set a smaller learning rate? Could you give me some suggestions?

Thank you~

hma02 commented 8 years ago

@jiangqy What is your batch size and current learning rate?

heipangpang commented 8 years ago

I have run into the same problem; my batch size is 256 and the learning rate is 0.01. Do you have any ideas?

jiangqy commented 8 years ago

@hma02 My batch size is 256 and learning rate is 0.01, too.

hma02 commented 8 years ago

@jiangqy @heipangpang It looks like you are running the single-GPU train.py, so the problem is not related to weight exchanging.

The cost should be around 6.9 initially (roughly ln(1000) for a 1000-class softmax making uniform predictions).

The unbounded cost value may be caused by gradient explosion. I have run into similar situations when initializing a deep network with arrays of large variance and mean. Too large a learning rate or batch size can also lead to strong gradient zigzagging.
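If you want to guard against exploding gradients directly, one option is to clip the global gradient norm inside the SGD updates. Here is a minimal sketch; the helper name clipped_sgd_updates and its max_norm parameter are illustrative and not part of this repo's train.py:

```python
import theano.tensor as T

def clipped_sgd_updates(cost, params, lr=0.01, max_norm=10.0):
    # Gradients of the scalar cost w.r.t. every shared parameter.
    grads = T.grad(cost, params)
    # Global L2 norm over all gradients.
    norm = T.sqrt(sum(T.sum(g ** 2) for g in grads))
    # Shrink the step if the global norm exceeds max_norm.
    scale = T.minimum(1.0, max_norm / (norm + 1e-7))
    return [(p, p - lr * scale * g) for p, g in zip(params, grads)]
```

These updates would replace the plain SGD updates passed to theano.function; lowering the learning rate (e.g. to 0.001) is worth trying as well.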

Also, check the input images as they are loaded to make sure they are preprocessed correctly and correspond to the loaded labels. You can visualize them using tricks similar to the ones here. You can also try using a stack of image_means as the input data to sanity-check the pipeline.
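A rough sanity check could look like the following; the file names and the (channel, row, col, batch) layout are assumptions, so adapt them to whatever your preprocessing actually produced (e.g. hickle .hkl batches):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed file names and batch layout -- adjust to your own preprocessing output.
batch = np.load('train_batch_0.npy')       # assumed shape: (3, 256, 256, batch_size)
labels = np.load('train_labels_0.npy')

# Statistics should look like pixel values (or mean-subtracted floats), not all zeros.
print(batch.shape, batch.dtype, batch.min(), batch.max(), batch.mean())

# Show the first image next to its label to confirm they correspond.
img = np.transpose(batch[:, :, :, 0], (1, 2, 0))   # -> (256, 256, 3)
# If the batch is already mean-subtracted, add the image mean back before casting.
plt.imshow(img.astype('uint8'))
plt.title('label: %d' % int(labels[0]))
plt.show()
```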

heipangpang commented 8 years ago

@hma02 I will try it. Thank you very much.

heipangpang commented 8 years ago

@hma02 When I checked the output of every layer, I found that layer_input is a zero matrix, which may be why I get such a large training loss.

hma02 commented 8 years ago

@heipangpang Yes, that is probably why you got such a large cost. Make sure you set use_data_layer to False in config.yaml. Then layer_input should be equal to x, as shown here, which is the input batch. If x is a zero matrix, something is wrong with the preprocessed image batches.
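As a quick check, you can print statistics of the shared variable that feeds x right before calling the training function. The name shared_x below is a placeholder for whatever variable your script actually uses:

```python
import numpy as np

def check_input_batch(shared_x):
    # shared_x is a placeholder for the Theano shared variable that holds
    # the current input batch; pass in whatever train.py actually uses.
    x_val = shared_x.get_value()
    print('input batch:', x_val.shape, x_val.min(), x_val.max(), np.abs(x_val).mean())
    assert np.abs(x_val).max() > 0, 'input batch is all zeros -- check batch loading'
```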

heipangpang commented 8 years ago

@hma02 But when I load the batches by hand in Python, I seem to get the correct results. Thank you very much.

heipangpang commented 8 years ago

@hma02 I am getting the correct results now, thank you very much.

liaocs2008 commented 7 years ago

I had the same problem here. If para_load is set to False, I can train normally. But I think one of the great contributions of this work is the parallel loading, right?

Magotraa commented 7 years ago

@heipangpang Can you please share what change exactly made it possible for you to get correct results?

As you wrote, "I am getting the correct results now, thank you very much."