to-shimo opened 8 years ago
NaN means that the network has blown up; this is a common problem with the ReLU activation. "NNet rejected" means that the program recognized the problem and restarted training from the previous checkpoint (from scratch in your case) with a lower learning rate.
Try reducing the learning rate or using a more stable activation function, e.g. relu-trunc.
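As a sketch of that suggestion (reusing the flags from the command below, with -hidden-type switched to relu-trunc and a smaller starting -alpha; the value 0.001 is only illustrative, not a tuned recommendation):
# same data and model settings as the original run; only -hidden-type and -alpha changed
./rnnlm -rnnlm keji.model.1 -train keji.all.gb.seg -valid keji.random.valid -hidden 128 -hidden-type relu-trunc -nce 20 -alpha 0.001 -threads 8 -direct 500 -direct-order 4 -nce-accurate-test 1 -use-cuda 1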
My command is: ./rnnlm -rnnlm keji.model.1 -train keji.all.gb.seg -valid keji.random.valid -hidden 128 -hidden-type relu -nce 20 -alpha 0.01 -threads 8 -direct 500 -direct-order 4 -nce-accurate-test 1 -use-cuda 1
The training log is as follows:
Epoch 1 lr: 1.00e-02/1.00e-01 progress: 88.58% 380.07 Kwords/sec entropy (bits) valid: -nan elapsed: 461.8s+1263.8s Awful: Nnet rejected
Epoch 2 lr: 5.00e-03/5.00e-02 progress: 90.16% 378.44 Kwords/sec entropy (bits) valid: -nan elapsed: 460.7s+1263.8s Awful: Nnet rejected
thx.