cyhhhhhh opened this issue 1 year ago
I spent several days on this before finding out that this model cannot generate multiple predictions, and neither can any of the other iterative-refinement generators, which was very frustrating. Why not state this more clearly in the documentation or the README? The current documentation feels too general given how many models and functions fairseq covers.
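For anyone who hits the same wall: as far as I can tell, the closest option for these models is --iter-decode-with-beam, which lets the iterative-refinement generator decode with several length candidates, but it still re-ranks them internally and writes out a single hypothesis per sentence, so it is not a substitute for --nbest. A sketch of what that would look like with the same checkpoint and data as below (flag behaviour may vary across fairseq versions, so treat this as untested):

# length-beam decoding for the iterative-refinement checkpoint; still emits one hypothesis per sentence
fairseq-generate \
    data-bin/Dataset \
    --gen-subset test \
    --task translation_lev \
    --path checkpoints/checkpoint_best.pt \
    --iter-decode-max-iter 9 \
    --iter-decode-eos-penalty 0 \
    --iter-decode-with-beam 5 \
    --beam 1 --remove-bpe \
    --print-step \
    --batch-size 400 \
    --results-path data-test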
❓ Questions and Help
What is your question?
I want to generate the top-k translated sentences for each input. The documentation says that setting --nbest should achieve this. However, I have tried it, and generation still produces only one sentence per input.
Code
fairseq-generate \
    data-bin/Dataset \
    --gen-subset test \
    --task translation_lev \
    --path checkpoints/checkpoint_best.pt \
    --iter-decode-max-iter 9 \
    --iter-decode-eos-penalty 0 \
    --beam 1 --remove-bpe \
    --print-step \
    --batch-size 400 \
    --results-path data-test \
    --nbest 10
What have you tried?
I set --nbest to 10, but it had no effect: the output still contains a single hypothesis per source sentence.
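For comparison, --nbest does behave as expected with the standard beam-search generator used by the plain translation task. A minimal sketch, assuming a hypothetical autoregressive checkpoint checkpoints/ar_checkpoint.pt trained on the same data (note that --nbest cannot exceed --beam):

# beam-search decoding with a (hypothetical) autoregressive checkpoint; prints 10 hypotheses per sentence
fairseq-generate \
    data-bin/Dataset \
    --gen-subset test \
    --task translation \
    --path checkpoints/ar_checkpoint.pt \
    --beam 10 --nbest 10 \
    --remove-bpe \
    --batch-size 400 \
    --results-path data-test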
What's your environment?
How you installed fairseq (pip, source):