facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

score calculation in translation_moe #5566

Open zhmingzhaung opened 2 days ago

zhmingzhaung commented 2 days ago

❓ Questions and Help

Before asking:

  1. search the issues.
  2. search the docs.

What is your question?

for EXPERT in $(seq 0 2); do \
    cat wmt14-en-de.extra_refs.tok \
    | grep ^S | cut -f 2 \
    | fairseq-interactive $DATA_DIR/ --path $MODELDIR \
        --beam $BEAM_SIZE \
        --bpe subword_nmt --bpe-codes $BPE_CODE \
        --source-lang $SRC \
        --target-lang $TGT  \
        --task translation_moe --user-dir examples/translation_moe/translation_moe_src \
        --method hMoElp --mean-pool-gating-network \
        --num-experts 3 \
        --batch-size $DECODER_BS \
        --buffer-size $DECODER_BS --max-tokens 6000 \
        --remove-bpe \
        --gen-expert $EXPERT ; \
done > wmt14-en-de.extra_refs.tok.gen.3experts

python examples/translation_moe/score.py \
    --sys wmt14-en-de.extra_refs.tok.gen.3experts \
    --ref wmt14-en-de.extra_refs.tok

This is the command I used. As you can see, I added --remove-bpe, but the output log is still:

That's 100 lines that end in a tokenized period ('.')
It looks like you forgot to detokenize your test data, which may hurt your score.
If you insist your data is detokenized, or don't care, you can suppress this message with the force parameter.
pairwise BLEU: 48.35
#refs covered: 2.63
[... the same three-line warning is printed ten more times ...]
average multi-reference BLEU (leave-one-out): 37.67

I don't know what I should do. Please help.
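
One thing I was wondering: the warning seems to come from sacrebleu complaining about tokenized input, so should I detokenize the text fields of the generated file (and the references) before running score.py? Something like this sketch with sacremoses (the output file name is mine, and I am not sure which lines score.py actually reads):

from sacremoses import MosesDetokenizer

md = MosesDetokenizer(lang="de")

def detokenize_line(line):
    # Detokenize the last tab-separated field, leaving any prefix columns alone.
    fields = line.rstrip("\n").split("\t")
    fields[-1] = md.detokenize(fields[-1].split())
    return "\t".join(fields) + "\n"

with open("wmt14-en-de.extra_refs.tok.gen.3experts") as fin, \
        open("wmt14-en-de.extra_refs.detok.gen.3experts", "w") as fout:
    for line in fin:
        # Only touch hypothesis lines (H-/D-); source and score lines are copied as-is.
        fout.write(detokenize_line(line) if line and line[0] in "HD" else line)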

Code

What have you tried?

What's your environment?

zhmingzhaung commented 2 days ago

I saw Inconsistent input for score calculation in translation_moe [#2277](https://github.com/facebookresearch/fairseq/issues/2277) and added --remove-bpe, but it doesn't work. Maybe I need to change score.py?
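
For example, if score.py computes BLEU with sacrebleu.corpus_bleu internally (I have not checked the exact call), would passing force=True just silence the warning without fixing the tokenization? A minimal sketch, where sys_lines and ref_lines are placeholder lists of hypothesis and reference strings:

import sacrebleu

# Placeholders for the parsed hypothesis and reference strings.
sys_lines = ["Das ist ein Test ."]
ref_lines = ["Das ist ein Test."]

# force=True suppresses the "tokenized period" warning, but it does not
# change how the inputs are tokenized, so the scores stay the same.
bleu = sacrebleu.corpus_bleu(sys_lines, [ref_lines], force=True)
print(bleu.score)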