facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

fairseq-train hangs in a loop of validation and checkpoint saving while using the truncated_bptt task #3603

Open Mohamed-E-Fayed opened 3 years ago

Mohamed-E-Fayed commented 3 years ago

I want to train Transformer-XL using the truncated BPTT task, but training gets stuck cycling between validation and checkpoint saving.

I noticed that this happens after I stop training and resume it: the next validation after resuming hangs as described. For example, if I stop training at 49,100 updates and resume, it hangs at the validation at 56,000 updates. If I never stop training, it continues normally.

Since I run on a Slurm cluster, I have to set a time limit for each job, so training necessarily gets stopped and resumed before it finishes.
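
For context, this is roughly the shape of the job script involved (a sketch; the time limit, GPU request, and wrapper script name are placeholders, not the actual values). As far as I understand, fairseq-train restores from $SAVEDIR/checkpoint_last.pt by default (via --restore-file), so resubmitting the same script is what resumes training:

#!/bin/bash
#SBATCH --time=24:00:00        # placeholder walltime imposed by the cluster
#SBATCH --gres=gpu:4           # placeholder GPU request
#SBATCH --output=slurm-%j.out

# Resubmitting this job resumes from $SAVEDIR/checkpoint_last.pt automatically.
bash train_transformer_xl.sh   # hypothetical wrapper around the fairseq-train command below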

Thanks in advance.

Environment:

Red Hat Enterprise Linux 7.7, Slurm cluster

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/pytorch/fairseq
cd fairseq/
pip install -e .
pip install transformers

How to reproduce:

src=en
ARCH=transformer_xl
DATAPATH=data-bin
NUM_LAYERS=16
SAVEDIR=checkpoints_64

srun --export=ALL \
    $(which fairseq-train) $DATAPATH \
    --user-dir ~/fairseq/examples/truncated_bptt \
    --ddp-backend no_c10d \
    --task truncated_bptt_lm --tokens-per-sample 150 \
    --distributed-port 12350 \
    -a $ARCH --optimizer adam --lr 0.00025 --min-lr 0.0 \
    --n-layer $NUM_LAYERS --d-model 410 --n-head 10 \
    --d-head 41 --d-inner 2300 --mem-len 256 \
    --save-dir $SAVEDIR \
    --dropout 0.1 --clip-norm 0.25 --lr-scheduler cosine --weight-decay 0.0001 \
    --max-update 15000000 --warmup-updates 0 \
    --update-freq 1 --max-epoch 25 \
    --batch-size 32 --num-workers 8 --memory-efficient-fp16 --fp16-scale-tolerance=0.25 --min-loss-scale=0.001 \
    --skip-invalid-size-inputs-valid-test \
    --save-interval-updates 7000 --keep-interval-updates 10 --keep-best-checkpoints 10 \
    | tee -a $SAVEDIR/training.log
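
For completeness: when the job is resubmitted, fairseq-train picks up $SAVEDIR/checkpoint_last.pt on its own (the default --restore-file), so the command above is rerun unchanged. The snippet below is only a sketch of the restore-related flags fairseq exposes; whether --reset-dataloader works around the hang is an assumption, not something verified here.

# fairseq-train restores model, optimizer, and dataloader state from
# checkpoint_last.pt by default. These flags control what gets restored;
# adding --reset-dataloader on resume is a guess at a workaround, not a fix.
RESUME_FLAGS="--restore-file $SAVEDIR/checkpoint_last.pt --reset-dataloader"
# e.g. append $RESUME_FLAGS to the fairseq-train invocation above when resuming.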

OUTPUT:

This is a subset of the output:

2021-06-09 11:02:40 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 256.0
2021-06-09 11:02:40 | INFO | fairseq_cli.train | begin validation on "valid" subset
2021-06-09 11:03:24 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 1.764 | ppl 3.4 | wps 77178.7 | wpb 9600 | bsz 64 | num_updates 105000 | best_loss 1.763
2021-06-09 11:03:24 | INFO | fairseq.checkpoint_utils | Preparing to save checkpoint for epoch 1 @ 105000 updates
2021-06-09 11:03:24 | INFO | fairseq.trainer | Saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:03:26 | INFO | fairseq.trainer | Finished saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:03:35 | INFO | fairseq.checkpoint_utils | Saved checkpoint checkpoints_64/checkpoint_1_105000.pt (epoch 1 @ 105000 updates, score 1.764) (writing took 10.877134052105248 seconds)
2021-06-09 11:03:36 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 256.0
2021-06-09 11:03:36 | INFO | fairseq_cli.train | begin validation on "valid" subset
2021-06-09 11:04:19 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 1.763 | ppl 3.39 | wps 83393 | wpb 9600 | bsz 64 | num_updates 105000 | best_loss 1.763
2021-06-09 11:04:19 | INFO | fairseq.checkpoint_utils | Preparing to save checkpoint for epoch 1 @ 105000 updates
2021-06-09 11:04:19 | INFO | fairseq.trainer | Saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:04:21 | INFO | fairseq.trainer | Finished saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:04:36 | INFO | fairseq.checkpoint_utils | Saved checkpoint checkpoints_64/checkpoint_1_105000.pt (epoch 1 @ 105000 updates, score 1.763) (writing took 16.193258486222476 seconds)
2021-06-09 11:04:36 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 256.0
2021-06-09 11:04:36 | INFO | fairseq_cli.train | begin validation on "valid" subset
2021-06-09 11:05:19 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 1.763 | ppl 3.4 | wps 82632 | wpb 9600 | bsz 64 | num_updates 105000 | best_loss 1.763
2021-06-09 11:05:19 | INFO | fairseq.checkpoint_utils | Preparing to save checkpoint for epoch 1 @ 105000 updates
2021-06-09 11:05:19 | INFO | fairseq.trainer | Saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:05:21 | INFO | fairseq.trainer | Finished saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:05:36 | INFO | fairseq.checkpoint_utils | Saved checkpoint checkpoints_64/checkpoint_1_105000.pt (epoch 1 @ 105000 updates, score 1.763) (writing took 16.407197644002736 seconds)
2021-06-09 11:05:36 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 256.0
2021-06-09 11:05:36 | INFO | fairseq_cli.train | begin validation on "valid" subset
2021-06-09 11:06:20 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 1.763 | ppl 3.4 | wps 74971.5 | wpb 9600 | bsz 64 | num_updates 105000 | best_loss 1.763
2021-06-09 11:06:20 | INFO | fairseq.checkpoint_utils | Preparing to save checkpoint for epoch 1 @ 105000 updates
2021-06-09 11:06:20 | INFO | fairseq.trainer | Saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:06:21 | INFO | fairseq.trainer | Finished saving checkpoint to checkpoints_64/checkpoint_1_105000.pt
2021-06-09 11:06:35 | INFO | fairseq.checkpoint_utils | Saved checkpoint checkpoints_64/checkpoint_1_105000.pt (epoch 1 @ 105000 updates, score 1.763) (writing took 15.376382765825838 seconds)
2021-06-09 11:06:35 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 256.0
2021-06-09 11:06:35 | INFO | fairseq_cli.train | begin validation on "valid" subset
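
A quick way to confirm that the update counter is genuinely frozen, rather than the job just saving slowly (a sketch; the log path matches the tee target in the command above):

# Each pass through the validate/save loop logs the same update count, so a
# steadily growing count with num_updates stuck at 105000 confirms the loop.
grep -c "num_updates 105000" checkpoints_64/training.log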

This warning appears in the error file, though I think it is unrelated:

~/fairseq/fairseq/utils.py:368: UserWarning: amp_C fused kernels unavailable, disabling multi_tensor_l2norm; you may get better performance by installing NVIDIA's apex library
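
The warning only means the fused multi-tensor CUDA kernels are unavailable. To rule it out, apex can be built from source roughly as below (a sketch based on NVIDIA's apex README from around that time; the build options may differ for newer apex or PyTorch versions):

git clone https://github.com/NVIDIA/apex
cd apex
# --cpp_ext/--cuda_ext build the fused kernels behind multi_tensor_l2norm
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./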

guo453585719 commented 2 years ago

I had the same problem