grammarly / gector

Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)

Reproducing experiments and finding different scores after Stage 1 #182

Open suttergustavo opened 1 year ago

suttergustavo commented 1 year ago

Hello,

I've been trying to reproduce the results presented in the paper with the provided code, but the results I obtained after Stage 1 are (slightly) different from the ones reported. These are my results on BEA-2019:

| Model | Precision | Recall | F0.5 |
| --- | --- | --- | --- |
| RoBERTa from the paper (Table 10) | 40.8 | 22.1 | 34.9 |
| RoBERTa from my run | 42.7 | 19.8 | 34.7 |
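
To rule out a scoring mistake on my side, I double-checked that both rows are internally consistent by recomputing F0.5 from precision and recall. This is just plain Python, no gector code involved:

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score from precision/recall given in percent."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Paper (Table 10): P=40.8, R=22.1
print(round(f_beta(40.8, 22.1), 1))  # 34.9
# My run: P=42.7, R=19.8
print(round(f_beta(42.7, 19.8), 1))  # 34.7
```

So the F0.5 gap really comes from the precision/recall trade-off being different, not from how I aggregated the scores.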

It was mentioned in previous issues that your best model comes from epoch 18 of Stage 1, but my best epoch was epoch 16. In addition, my training was considerably faster than what you reported in other issues, taking 2.5 days on a single RTX 6000.

I'm wondering whether these differences are to be expected given the randomness in initialization and data order, or whether there's something wrong with how I'm running the code.
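
If the gap is just run-to-run variance, I assume it could be narrowed by fixing the RNG seeds before training. This is only a generic sketch of what I mean (the `set_seed` helper is my own naming for illustration, not something gector's `train.py` necessarily exposes):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    # Fix the Python, NumPy and PyTorch RNGs so that weight
    # initialization and data shuffling are repeatable across runs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


set_seed(42)
```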

Please find my training command:

```bash
python3 train.py --train_set=../PIE/a1/a1_train.gector \
                 --dev_set=../PIE/a1/a1_val.gector \
                 --model_dir="$ckpt" \
                 --cold_steps_count=2 \
                 --accumulation_size=4 \
                 --updates_per_epoch=10000 \
                 --tn_prob=0 \
                 --tp_prob=1 \
                 --transformer_model=roberta \
                 --special_tokens_fix=1 \
                 --tune_bert=1 \
                 --skip_correct=1 \
                 --skip_complex=0 \
                 --n_epoch=20 \
                 --patience=3 \
                 --max_len=50 \
                 --batch_size=64 \
                 --tag_strategy=keep_one \
                 --cold_lr=1e-3 \
                 --lr=1e-5 \
                 --predictor_dropout=0.0 \
                 --lowercase_tokens=0 \
                 --pieces_per_token=5 \
                 --vocab_path=data/output_vocabulary \
                 --label_smoothing=0.0
```
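
For context, this is my understanding of the training budget implied by these flags, assuming `--accumulation_size` multiplies the per-step batch (the usual gradient-accumulation behaviour; please correct me if gector interprets it differently):

```python
batch_size = 64             # --batch_size
accumulation_size = 4       # --accumulation_size
updates_per_epoch = 10_000  # --updates_per_epoch
n_epoch = 20                # --n_epoch

effective_batch = batch_size * accumulation_size  # 256 sentences per update
max_updates = updates_per_epoch * n_epoch         # 200,000 updates before patience kicks in
print(effective_batch, max_updates)
```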

Thank you for your time :)