salesforce / awd-lstm-lm

LSTM and QRNN Language Model Toolkit for PyTorch
BSD 3-Clause "New" or "Revised" License

Correct way to continue training? #19

Closed banyh closed 6 years ago

banyh commented 6 years ago

My training was interrupted at epoch 150. To continue training with python main.py, I've added a new argument:

parser.add_argument('--load', type=str, default='',
                    help='path to load the final model')

and modified the model instantiation:

if not args.load:
    model = model.RNNModel(args.model, ntokens, args.emsize, args.nhid, args.nlayers, args.dropout, args.dropouth, args.dropouti, args.dropoute, args.wdrop, args.tied)
else:
    with open(args.load, 'rb') as f:
        model = torch.load(f)

Then run the training procedure: python3 -u main.py --model QRNN --batch_size 20 --clip 0.2 --wdrop 0.1 --nhid 1550 --nlayers 4 --emsize 400 --dropouth 0.3 --seed 9001 --dropouti 0.4 --epochs 400 --save PTB.pt --load PTB.pt

Do the following logs look fine?

| end of epoch   1 | time: 103.37s | valid loss  4.19 | valid ppl    65.86
| end of epoch   2 | time: 107.36s | valid loss  4.20 | valid ppl    66.46
| end of epoch   3 | time: 105.37s | valid loss  4.19 | valid ppl    66.01
| end of epoch   4 | time: 106.24s | valid loss  4.20 | valid ppl    66.56
| end of epoch   5 | time: 101.58s | valid loss  4.20 | valid ppl    66.42
| end of epoch   6 | time: 102.41s | valid loss  4.19 | valid ppl    66.22
| end of epoch   7 | time: 104.01s | valid loss  4.19 | valid ppl    66.00
Switching!
| end of epoch   8 | time: 110.03s | valid loss  4.14 | valid ppl    62.92
| end of epoch   9 | time: 109.40s | valid loss  4.14 | valid ppl    62.67
| end of epoch  10 | time: 109.45s | valid loss  4.14 | valid ppl    62.52
| end of epoch  11 | time: 110.47s | valid loss  4.13 | valid ppl    62.39
| end of epoch  12 | time: 111.34s | valid loss  4.13 | valid ppl    62.30
| end of epoch  13 | time: 107.84s | valid loss  4.13 | valid ppl    62.25
keskarnitish commented 6 years ago

I would recommend dumping the optimizer state to disk as well. That way, it's more general.
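A minimal sketch of what that could look like, saving model and optimizer state_dicts in one checkpoint (the function names and checkpoint layout here are illustrative, not part of this repo):

```python
import torch

def save_checkpoint(model, optimizer, epoch, path):
    # state_dicts are lighter and more portable than pickling whole objects
    torch.save({
        'epoch': epoch,
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
    }, path)

def load_checkpoint(model, optimizer, path):
    # Restores weights and optimizer internals (e.g. learning rate,
    # momentum buffers, or ASGD averages) so training resumes where it left off
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    return checkpoint['epoch']
```

Restoring the optimizer matters here in particular because the NT-ASGD schedule ("Switching!") keeps averaged weights inside the optimizer state; reloading only the model would discard them.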

banyh commented 6 years ago

Fixed in cc26a55f2525ed3801bd6c196716e9f330484af4