facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

AttributeError: module 'sacrebleu' has no attribute 'compute_bleu' in training translation with quant_noise #3836

Open HamidShojanazeri opened 3 years ago

HamidShojanazeri commented 3 years ago

🐛 Bug

The training of the Translation model fails with

AttributeError: module 'sacrebleu' has no attribute 'compute_bleu'

To Reproduce

Follow the example data preparation steps, then run:

CUDA_VISIBLE_DEVICES=0 fairseq-train \
    data-bin/iwslt14.tokenized.de-en \
    --arch transformer_iwslt_de_en --share-decoder-input-output-embed \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --dropout 0.3 --weight-decay 0.0001 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 \
    --eval-bleu \
    --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \
    --eval-bleu-detok moses \
    --eval-bleu-remove-bpe \
    --eval-bleu-print-samples \
    --best-checkpoint-metric bleu --maximize-best-checkpoint-metric \
    --quant-noise-pq 0.1 --quant-noise-pq-block-size 8 \
    --save-dir checkpoints/fconv_quant_noise --max-epoch 1

Steps to reproduce the behavior (always include the command you ran):

Suggested Fix

In newer sacrebleu versions, compute_bleu lives on the BLEU class in sacrebleu.metrics. In the translation task script, add the import:

from sacrebleu.metrics import BLEU

and change lines #423 and #428 accordingly:

def compute_bleu(meters):
    import inspect

    # Newer sacrebleu versions renamed the `smooth` keyword to
    # `smooth_method`, so inspect the signature and pass whichever
    # keyword the installed version accepts.
    fn_sig = inspect.getfullargspec(BLEU.compute_bleu)[0]
    if "smooth_method" in fn_sig:
        smooth = {"smooth_method": "exp"}
    else:
        smooth = {"smooth": "exp"}
    bleu = BLEU.compute_bleu(
        correct=meters["_bleu_counts"].sum,
        total=meters["_bleu_totals"].sum,
        sys_len=meters["_bleu_sys_len"].sum,
        ref_len=meters["_bleu_ref_len"].sum,
        **smooth,
    )
    return round(bleu.score, 2)
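The core of the fix is the signature-inspection trick: probe which smoothing keyword the installed function accepts before calling it. A minimal, self-contained sketch of that pattern (the two stand-in functions below are hypothetical placeholders for the old and new sacrebleu APIs, not real sacrebleu code):

```python
import inspect

# Hypothetical stand-ins: the newer API takes `smooth_method`,
# the older API takes `smooth`.
def new_api(correct, total, sys_len, ref_len, smooth_method="exp"):
    return "smooth_method=" + smooth_method

def old_api(correct, total, sys_len, ref_len, smooth="exp"):
    return "smooth=" + smooth

def call_with_right_kwarg(fn):
    # Same feature detection as the fix above: list the function's
    # named arguments and pick the smoothing keyword it understands.
    args = inspect.getfullargspec(fn)[0]
    kw = {"smooth_method": "exp"} if "smooth_method" in args else {"smooth": "exp"}
    return fn(1, 2, 3, 4, **kw)

print(call_with_right_kwarg(new_api))  # smooth_method=exp
print(call_with_right_kwarg(old_api))  # smooth=exp
```

Detecting the keyword at call time keeps one code path working across both sacrebleu generations instead of pinning a version.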
lorelupo commented 2 years ago

That is due to a version incompatibility, which you can fix with: pip uninstall sacrebleu; pip install sacrebleu==1.5.1

andongBlue commented 10 months ago

> That is due to a version incompatibility, which you can fix with: pip uninstall sacrebleu; pip install sacrebleu==1.5.1

Thx, it works for me!