HillZhang1999 / SynGEC

Code & data for our EMNLP2022 paper "SynGEC: Syntax-Enhanced Grammatical Error Correction with a Tailored GEC-Oriented Parser"
https://arxiv.org/abs/2210.12484
MIT License

Train SynGEC #14

Closed · sunbo1999 closed this issue 1 year ago

sunbo1999 commented 1 year ago

Hello, I ran into a problem training SynGEC on my own data following your steps. I first preprocessed the data with pipeline_gopar.sh and preprocess_syngec.sh; then, while running train_syngec.sh, training SynGEC failed with the following error:

2022-12-06 15:18:48 | INFO | fairseq_cli.train | task: syntax-enhanced-translation (SyntaxEnhancedTranslationTask)
2022-12-06 15:18:48 | INFO | fairseq_cli.train | model: syntax_enhanced_bart_large (SyntaxEnhancedBARTModel)
2022-12-06 15:18:48 | INFO | fairseq_cli.train | criterion: label_smoothed_cross_entropy (LabelSmoothedCrossEntropyCriterion)
2022-12-06 15:18:48 | INFO | fairseq_cli.train | num. model params: 416394240 (num. trained: 40983552)
2022-12-06 15:18:51 | INFO | fairseq.trainer | detected shared parameter: encoder.sentence_encoder.embed_tokens.weight <- decoder.embed_tokens.weight
2022-12-06 15:18:51 | INFO | fairseq.trainer | detected shared parameter: encoder.sentence_encoder.embed_tokens.weight <- decoder.output_projection.weight
2022-12-06 15:18:51 | INFO | fairseq.utils | ***********************CUDA enviroments for all 1 workers***********************
2022-12-06 15:18:51 | INFO | fairseq.utils | rank   0: capabilities =  7.0  ; total memory = 31.749 GB ; name = Tesla V100-PCIE-32GB                    
2022-12-06 15:18:51 | INFO | fairseq.utils | ***********************CUDA enviroments for all 1 workers***********************
2022-12-06 15:18:51 | INFO | fairseq_cli.train | training on 1 devices (GPUs/TPUs)
2022-12-06 15:18:51 | INFO | fairseq_cli.train | max tokens per GPU = 2048 and max sentences per GPU = None
LabelSmoothedCrossEntropyCriterion LabelSmoothedCrossEntropyCriterion
2022-12-06 15:18:57 | INFO | fairseq.trainer | loaded checkpoint ../../model/chinese_bart_baseline/2022/stage1/checkpoint_best.pt (epoch 11 @ 3683 updates)
2022-12-06 15:18:58 | INFO | fairseq.trainer | loading train data for epoch 11
2022-12-06 15:18:58 | INFO | fairseq.data.data_utils | loaded 11964 examples from: ../../preprocess/chinese_csec_with_syntax_transformer/bin/train.src-tgt.src
2022-12-06 15:18:58 | INFO | fairseq.data.data_utils | loaded 11964 examples from: ../../preprocess/chinese_csec_with_syntax_transformer/bin/train.src-tgt.tgt
2022-12-06 15:18:58 | INFO | syngec_model.syntax_guided_gec_task | ../../preprocess/chinese_csec_with_syntax_transformer/bin train src-tgt 11964 examples
2022-12-06 15:18:58 | INFO | fairseq.data.data_utils | loaded 11964 examples from: ../../preprocess/chinese_csec_with_syntax_transformer/bin/train.conll.src-tgt.src
2022-12-06 15:18:58 | INFO | fairseq.data.data_utils | loaded 11964 examples from: ../../preprocess/chinese_csec_with_syntax_transformer/bin/train.dpd.src-tgt.src
2022-12-06 15:18:58 | INFO | fairseq.data.data_utils | loaded 11964 examples from: ../../preprocess/chinese_csec_with_syntax_transformer/bin/train.probs.src-tgt.src
2022-12-06 15:18:58 | INFO | fairseq.data.language_pair_dataset | success! syntax types: 1, source conll lines: 11964
2022-12-06 15:18:58 | WARNING | fairseq.tasks.fairseq_task | 2 samples have invalid sizes and will be skipped, max_positions=(128, 128), first few sample ids=[376, 367]
2022-12-06 15:18:58 | INFO | fairseq.trainer | begin training epoch 11
Traceback (most recent call last):
  File "../../src/src_syngec/fairseq-0.10.2/fairseq_cli/train.py", line 356, in <module>
    cli_main()
  File "../../src/src_syngec/fairseq-0.10.2/fairseq_cli/train.py", line 352, in cli_main
    distributed_utils.call_main(args, main)
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/distributed_utils.py", line 301, in call_main
    main(args, **kwargs)
  File "../../src/src_syngec/fairseq-0.10.2/fairseq_cli/train.py", line 125, in main
    valid_losses, should_stop = train(args, trainer, task, epoch_itr)
  File "/users10/bsun/miniconda3/envs/syngec/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "../../src/src_syngec/fairseq-0.10.2/fairseq_cli/train.py", line 208, in train
    log_output = trainer.train_step(samples)
  File "/users10/bsun/miniconda3/envs/syngec/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/trainer.py", line 662, in train_step
    raise e
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/trainer.py", line 638, in train_step
    self.optimizer.step()
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/optim/fp16_optimizer.py", line 196, in step
    self.fp32_optimizer.step(closure)
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/optim/fairseq_optimizer.py", line 114, in step
    self.optimizer.step(closure)
  File "/users10/bsun/miniconda3/envs/syngec/lib/python3.8/site-packages/torch/optim/optimizer.py", line 109, in wrapper
    return func(*args, **kwargs)
  File "/users10/bsun/CSEC/SynGEC/src/src_syngec/fairseq-0.10.2/fairseq/optim/adam.py", line 199, in step
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (375410688) must match the size of tensor b (40983552) at non-singleton dimension 0

My data preprocessing followed your scripts exactly; where could the problem be?

HillZhang1999 commented 1 year ago

This has been fixed; please pull the latest train script.
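
For anyone hitting the same RuntimeError: Adam is applying a gradient over the 40983552 trainable parameters to a moment buffer (exp_avg) that was built over a much larger flattened parameter set, which happens when the optimizer state loaded from the stage-1 checkpoint no longer matches the parameters being trained. A minimal diagnostic sketch, assuming fairseq 0.10.2's checkpoint layout (the last_optimizer_state key) and reusing the checkpoint path from the log above:

import torch

# Compare the Adam moment size saved in the checkpoint with the
# "num. trained" count reported at startup (40983552 in the log above).
ckpt = torch.load(
    "../../model/chinese_bart_baseline/2022/stage1/checkpoint_best.pt",
    map_location="cpu",
)
state = ckpt.get("last_optimizer_state", {}).get("state", {})
saved = sum(s["exp_avg"].numel() for s in state.values() if "exp_avg" in s)
print("saved exp_avg elements:", saved)  # 375410688 in the failing run

If the two counts differ, the loaded optimizer state does not match the current trainable-parameter set; pulling the fixed train script resolves it, and fairseq's --reset-optimizer flag is another way to discard the stale state when resuming.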

sunbo1999 commented 1 year ago

Does the baseline from the first half (Train Baseline) need to be re-trained?

HillZhang1999 commented 1 year ago

No, that's not necessary.

sunbo1999 commented 1 year ago

Hello, while running inference with generate_syngec_bart.sh I noticed a small issue:

ID_FILE=$TEST_DIR/src.id
cp $ID_FILE $OUTPUT_DIR/mucgec.id

The ID_FILE referenced here does not exist, and nothing in the data preprocessing produces it. What is this file?

HillZhang1999 commented 1 year ago

At training time max_length was set to 64 (GPU memory limit), so some long sentences could not be handled at prediction time; we therefore split sentences first and predict afterwards. You can refer to this file to generate the id_file and a new input file from the original input; after prediction, use post_process_chinese.py to restore outputs that correspond one-to-one with the original input file. Of course, you can also skip this step: if you do not run the post-processing, the id file is never needed.
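
The split-then-merge step itself is simple. A minimal sketch of the idea (the function names and the naive punctuation-based splitting are illustrative assumptions, not the repo's actual script):

def split_long_sentences(lines, max_len=64):
    # Split each input line into pieces no longer than max_len, cutting at
    # Chinese sentence-final punctuation, and record which original line
    # each piece came from (this becomes the id file).
    ids, pieces = [], []
    for i, line in enumerate(lines):
        buf = ""
        for ch in line:
            buf += ch
            if ch in "。!?" or len(buf) >= max_len:
                ids.append(i)
                pieces.append(buf)
                buf = ""
        if buf:
            ids.append(i)
            pieces.append(buf)
    return ids, pieces

def merge_predictions(ids, preds):
    # Group predicted pieces by original line id and concatenate them,
    # mirroring what post_process_chinese.py does with the id file.
    merged = {}
    for i, p in zip(ids, preds):
        merged.setdefault(i, []).append(p)
    return ["".join(merged[i]) for i in sorted(merged)]

Each line of the id file marks which original sentence a predicted piece belongs to, so restoring the one-to-one correspondence is a stable group-by.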

sunbo1999 commented 1 year ago

Hello, if I want to run inference with the bart-baseline I trained myself, which of the following arguments need to be updated?

CUDA_VISIBLE_DEVICES=$CUDA_DEVICE python -u ${FAIRSEQ_DIR}/interactive.py $PROCESSED_DIR/bin \
    --user-dir ../../src/src_syngec/syngec_model \
    --task syntax-enhanced-translation \
    --path ${MODEL_DIR}/checkpoint_best.pt \
    --beam ${BEAM} \
    --nbest ${N_BEST} \
    -s src \
    -t tgt \
    --buffer-size 10000 \
    --batch-size 32 \
    --num-workers 12 \
    --log-format tqdm \
    --remove-bpe \
    --fp16 \
    --conll_file $MuCGEC_TEST_BIN_DIR/test.conll.src-tgt.src \
    --dpd_file $MuCGEC_TEST_BIN_DIR/test.dpd.src-tgt.src \
    --probs_file $MuCGEC_TEST_BIN_DIR/test.probs.src-tgt.src \
    --output_file $OUTPUT_DIR/mucgec.out.nbest \
    < $OUTPUT_DIR/mucgec.src.char

Is it enough to just remove --conll_file, --dpd_file, and --probs_file?

HillZhang1999 commented 1 year ago

Yes, exactly.

sunbo1999 commented 1 year ago

Got it, thank you.

sunbo1999 commented 1 year ago

Hello, the preprocessing before training the bart-baseline follows preprocess_baseline.sh:

python $FAIRSEQ_DIR/preprocess.py --source-lang src --target-lang tgt \
       --user-dir ../../src/src_syngec/syngec_model \
       --task syntax-enhanced-translation \
       --trainpref $PROCESSED_DIR/train.char \
       --validpref $PROCESSED_DIR/valid.char \
       --destdir $PROCESSED_DIR/bin \
       --workers $WORKER_NUM \
       --labeldict ../../data/dicts/syntax_label_gec.dict \
       --srcdict ../../data/dicts/chinese_vocab.count.txt \
       --tgtdict ../../data/dicts/chinese_vocab.count.txt

My understanding is that the bart-baseline is a model without injected syntactic knowledge, i.e., plain BART doing the GEC task, yet the preprocessing passes --labeldict (syntax_label_gec.dict)? Is there something wrong with my understanding?

HillZhang1999 commented 1 year ago

It is only passed in but never used; we may fix this later.

sunbo1999 commented 1 year ago

OK, understood. So my understanding is correct, right? The bart-baseline is a model without syntactic knowledge, just BART doing the GEC task.

HillZhang1999 commented 1 year ago

That's right.

sunbo1999 commented 1 year ago

Also, the task is still syntax-enhanced-translation; that's fine as well, right?

HillZhang1999 commented 1 year ago

That's fine. The reason the original translation task cannot be used for the baseline is that we added some extra optimizations, such as the Chinese BART vocabulary and source-token dropout.
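
Source-token dropout here refers to randomly corrupting source-side tokens during training so the model does not copy noisy input too faithfully. A minimal sketch of the idea (the <unk>-replacement strategy and the probability are assumptions, not necessarily this repo's exact implementation; indices 1 and 3 are fairseq's default pad and unk):

import torch

def source_token_dropout(src_tokens, unk_idx=3, pad_idx=1, p=0.1):
    # Replace a random fraction p of non-padding source tokens with <unk>
    # during training; leave the batch untouched otherwise.
    if p <= 0:
        return src_tokens
    mask = (torch.rand_like(src_tokens, dtype=torch.float) < p) & src_tokens.ne(pad_idx)
    return src_tokens.masked_fill(mask, unk_idx)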

sunbo1999 commented 1 year ago

OK, understood. Thanks!