Open zwhe99 opened 3 years ago
Hello, I have the same question. Have you found an answer yet?
#### ❓ Questions and Help

#### What is your question?
When I execute the following command, it prints this warning:

```
WARNING | fairseq_cli.generate | BLEU score is being computed by splitting detokenized string on spaces, this is probably not what you want. Use --sacrebleu for standard 13a BLEU tokenization
```
What's the difference between `--scoring sacrebleu` and `--sacrebleu`?
#### Code
```shell
fairseq-generate /apdcephfs/share_916081/timurhe/dataset/mbart-wmt18-zhen \
    --path /apdcephfs/share_916081/timurhe/workspaces/GraduationProject/mbart25-ft-wmt18-zhen/ckpts/checkpoint_best.pt \
    --user-dir /apdcephfs/share_916081/timurhe/pack-fairseq/myfairseq \
    --task translation_from_pretrained_bart \
    --gen-subset test \
    -t $trg -s $src \
    --bpe 'sentencepiece' --sentencepiece-model /apdcephfs/share_916081/ychao/mbart.cc25/sentence.bpe.model \
    --scoring 'sacrebleu' --remove-bpe 'sentencepiece' \
    --sacrebleu \
    --batch-size 64 --langs $langs | tee gen2 /apdcephfs/share_916081/timurhe/pack-fairseq/gen2
```
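For context on what the warning is about: splitting a detokenized string on spaces leaves punctuation glued to the adjacent words, whereas sacrebleu's standard 13a tokenizer separates punctuation into its own tokens before BLEU is computed, so the two settings count different n-gram matches. A minimal stdlib-only sketch of the difference (the `simple_13a_like` function here is my own rough approximation of 13a tokenization for illustration, not fairseq's or sacrebleu's actual implementation):

```python
import re

def simple_13a_like(text):
    # Rough approximation of 13a-style tokenization: insert spaces
    # around punctuation so it becomes separate tokens, then split.
    text = re.sub(r'([.,!?;:"()])', r' \1 ', text)
    return text.split()

sent = "Hello, world."
print(sent.split())           # ['Hello,', 'world.'] -- punctuation stuck to words
print(simple_13a_like(sent))  # ['Hello', ',', 'world', '.'] -- punctuation split off
```

With whitespace splitting, `"world."` in a hypothesis can never match `"world"` in a reference that ends differently, which is why the warning says space-splitting "is probably not what you want."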
#### What have you tried?

#### What's your environment?

- fairseq Version (e.g., 1.0 or master): 0.10.2
- PyTorch Version (e.g., 1.0): 1.6
- OS (e.g., Linux): Linux
- How you installed fairseq (`pip`, source): pip install -e fairseq
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information: