facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

How to generate topK predictions using Levenshtein Transformer? #5054

Open cyhhhhhh opened 1 year ago

cyhhhhhh commented 1 year ago

❓ Questions and Help

Before asking:

  1. search the issues.
  2. search the docs.

What is your question?

I want to generate the top-K translated sentences. The documentation says that setting --nbest should achieve this, but when I tried it, translation still produced only one sentence.

Code

```
fairseq-generate data-bin/Dataset \
    --gen-subset test \
    --task translation_lev \
    --path checkpoints/checkpoint_best.pt \
    --iter-decode-max-iter 9 \
    --iter-decode-eos-penalty 0 \
    --beam 1 \
    --remove-bpe \
    --print-step \
    --batch-size 400 \
    --results-path data-test \
    --nbest 10
```
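One thing worth noting about the command above: in fairseq's standard (autoregressive) generation path, `--nbest` does not create extra hypotheses; it only selects how many of the `--beam` hypotheses get printed, so the effective output count is capped at `min(beam, nbest)`. With `--beam 1`, at most one hypothesis can ever be emitted regardless of `--nbest`. The toy sketch below (not fairseq code; the function name is made up for illustration) shows that truncation semantics:

```python
# Toy illustration of how --nbest interacts with --beam in standard beam
# search output: the generator keeps `beam` hypotheses per sentence, and
# the CLI then prints only the top `nbest` of those. So nbest is
# effectively capped at beam. (Hypothetical helper, not the fairseq API.)

def printed_hypotheses(scored_hyps, beam, nbest):
    """Keep the `beam` best-scoring hypotheses, then print the top `nbest`."""
    kept = sorted(scored_hyps, key=lambda h: h[1], reverse=True)[:beam]
    return kept[:nbest]

hyps = [("hyp_a", 0.9), ("hyp_b", 0.7), ("hyp_c", 0.5)]

# beam=1, nbest=10: only one hypothesis exists, so only one is printed.
print(printed_hypotheses(hyps, beam=1, nbest=10))  # → [("hyp_a", 0.9)]
```

For the Levenshtein Transformer the situation is different again: the iterative-refinement decode path does not use this beam machinery at all, which is what the discussion below turns on.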

What have you tried?

I set --nbest to 10 but it didn't work.

What's your environment?

cyhhhhhh commented 1 year ago

I spent several days on this and eventually found that this model cannot generate multiple predictions, and neither can any of the other iterative-refinement generators, which was very frustrating. Why isn't this stated more clearly in the documentation or README? The docs feel too general given how many models and features fairseq has.
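If the single-output behavior of the iterative-refinement generator is the constraint, one possible workaround (a sketch under assumptions, not a fairseq feature) is to run the refinement several times from different initial states, for example different length candidates, and rerank all the results by model score instead of keeping only the best one. This is similar in spirit to fairseq's length-beam decoding for non-autoregressive models, which internally decodes from multiple lengths but still returns a single hypothesis. The toy code below is purely illustrative: `refine` is a hypothetical stand-in for one deterministic refinement run, and its scoring rule is invented.

```python
# Hypothetical sketch (toy code, not the fairseq API): approximate top-K
# output from a single-hypothesis iterative-refinement decoder by running
# it from several initial length candidates and reranking by score.

def refine(initial_len):
    """Stand-in for one deterministic refinement run.

    Returns a (hypothesis, score) pair. The scoring rule here is a toy:
    we pretend lengths closer to 5 score higher.
    """
    score = 1.0 / (1 + abs(initial_len - 5))
    return (f"hypothesis_len_{initial_len}", score)

def topk_by_restarts(length_candidates, k):
    """Run `refine` once per candidate and keep the k best-scoring results."""
    runs = [refine(n) for n in length_candidates]
    runs.sort(key=lambda r: r[1], reverse=True)
    return runs[:k]

best = topk_by_restarts(range(1, 10), k=3)
print(best)  # three distinct hypotheses, best-scoring first
```

In practice the restarts could also vary the random seed or the initial target noise rather than the length; either way, each run still goes through the full refinement loop, so this trades K times the decoding cost for K candidate outputs.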