Closed Olivia-fsm closed 3 years ago
For multiple distractors, you need to enable beam search in model.generate; you can check this for further detail: https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate
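A minimal sketch of what such a generation call could look like. The parameter values below are illustrative, not taken from the BDG repo; note that in the transformers API, diversity_penalty only has an effect with group (diverse) beam search, i.e. num_beam_groups > 1, num_beam_groups must divide num_beams, and num_return_sequences cannot exceed num_beams:

```python
# Illustrative generation settings for several distractor candidates.
gen_kwargs = {
    "num_beams": 6,             # enable beam search
    "num_beam_groups": 3,       # diverse beam search; must divide num_beams
    "num_return_sequences": 3,  # return multiple candidates, <= num_beams
    "diversity_penalty": 1.0,   # penalize repeating tokens across beam groups
    "max_length": 64,
}

# outputs = model.generate(input_ids, **gen_kwargs)
# distractors = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```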
Besides, you may also need to set diversity_penalty to avoid the distractors being too similar, or use Jaccard similarity to filter the results with our implementation, as in: https://github.com/voidful/BDG/blob/main/BDG_selection.ipynb
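The Jaccard-based filtering could be sketched roughly as follows (a hypothetical standalone version for illustration; the word-level tokenization and the 0.5 threshold are assumptions, not necessarily what the linked notebook uses):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def filter_distractors(candidates, threshold=0.5):
    """Keep a candidate only if it is not too similar to any already kept one."""
    kept = []
    for cand in candidates:
        if all(jaccard(cand, k) < threshold for k in kept):
            kept.append(cand)
    return kept
```

For example, filter_distractors(["the red car", "the red cars", "a blue bike"]) drops "the red cars" (Jaccard 0.5 against the first candidate) while keeping the unrelated "a blue bike".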
I have tried the pretrained models posted on the HuggingFace Hub; however, I can only get one distractor for each input sequence. According to the paper, the model should learn from previously generated distractor candidates and produce multiple distractors.
How can I achieve that?