Closed xvshiting closed 6 years ago
Traceback (most recent call last):
  File "train.py", line 250, in <module>
    main()
  File "train.py", line 96, in main
    val_loss = validate(args, epoch, trainer, dataset, subset, num_gpus)
  File "train.py", line 213, in validate
    skip_invalid_size_inputs_valid_test=args.skip_invalid_size_inputs_valid_test)
  File "/home/xushiting/workspace/pytorch/fairseq-py/fairseq/data.py", line 109, in dataloader
    ignore_invalid_inputs=skip_invalid_size_inputs_valid_test))
  File "/home/xushiting/workspace/pytorch/fairseq-py/fairseq/data.py", line 255, in batches_by_size
    "none" if dst is None else dst.sizes[idx]))
Exception: Unable to handle input id 6045 of size 986 / 1057.
It seems that you have a sequence that's too long. Try increasing --max-positions, or if you prefer to skip these examples you can use --skip-invalid-size-inputs-valid-test.
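To illustrate the behavior the exception hints at, here is a minimal sketch (not fairseq's actual code; `filter_by_size` is a hypothetical helper) of a size check that either skips over-long examples or raises, mirroring the choice between --skip-invalid-size-inputs-valid-test and the error above:

```python
# Hypothetical sketch of the size filter behind the error message:
# examples longer than max_positions are skipped when skip_invalid=True,
# otherwise an exception like the one in the traceback is raised.

def filter_by_size(sizes, max_positions, skip_invalid=False):
    """Yield indices of examples that fit within max_positions."""
    for idx, size in enumerate(sizes):
        if size <= max_positions:
            yield idx
        elif skip_invalid:
            continue  # silently drop the over-long example
        else:
            raise Exception(
                "Unable to handle input id %d of size %d / %d."
                % (idx, size, max_positions))

# With max_positions=1024, the size-1057 example is dropped when skipping
# is enabled; without it, the same example would raise.
kept = list(filter_by_size([900, 1057, 986], 1024, skip_invalid=True))
```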
This is my command:

python train.py data-bin/Edit_distance_data --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 --arch fconv_iwslt_de_en --save-dir /home/robert_tien/work/pytorch/models/Edite_Distance_check_point/fconv
The default value for --max-positions is 1024. Try increasing it with --max-positions 2048.
@myleott Thank you! --skip-invalid-size-inputs-valid-test works for me! Does the argument '--max-positions' represent the size of the positional encoder?
Yes, it's the max size of the positional encoder embedding.
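To make this answer concrete, here is an illustrative sketch (assumed, not fairseq's exact module; the table and `encode_positions` helper are hypothetical) of why a learned positional-embedding table bounds sequence length: --max-positions fixes the number of rows, so a position beyond it has no embedding vector at all.

```python
import random

# Hypothetical learned positional-embedding table: one vector per
# position 0 .. max_positions-1. A sequence longer than max_positions
# would need a row that does not exist, hence the training-time error.
max_positions, embed_dim = 1024, 8
table = [[random.random() for _ in range(embed_dim)]
         for _ in range(max_positions)]

def encode_positions(seq_len):
    """Look up one positional vector per timestep of a sequence."""
    if seq_len > max_positions:
        raise ValueError("sequence of size %d exceeds --max-positions %d"
                         % (seq_len, max_positions))
    return [table[i] for i in range(seq_len)]

vecs = encode_positions(5)    # fine: 5 <= 1024
# encode_positions(1057) would raise, like the example in the traceback.
```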
@myleott Thanks again!
When I was training an fconv model, I got this problem too. I had just updated my dataset and re-preprocessed it to generate the bin and idx files.