
Infilling and text generation in specific contexts. #36

Open radi-cho opened 2 years ago

radi-cho commented 2 years ago

Hello, thank you for the great work. Can someone guide me on how to test the infilling task, i.e. give the surrounding context and have a sentence in the middle generated? I would also be thankful if you could show how the length of the generated response can be controlled.

radi-cho commented 2 years ago

Separately, I would like to ask the authors' opinion on whether such an approach could be applied to a more structured conversational problem with a corresponding dataset. For example, using the approach in chat systems where the history of a conversation is provided and a new utterance is sampled with the infilling procedure. The goal would be to derive more controllable response generation.

XiangLi1999 commented 2 years ago

Hi, thanks for the questions!

re1:

```
python scripts/infill.py --model_path diff_e2e-tgt_pad_rand16_transformer_lr0.0001_0.0_2000_sqrt_Lsimple_h128_s2_d0.1_sd102_xstart/ema_0.9999_200000.pt --batch_size 50 --partial_seq "START The Eagle is a PAD PAD PAD shop located in the city centre area PAD PAD King . Although price PAD PAD low at less than 20 pounds , it serves English food , with an PAD customer rating . END" --eval_task_ infill
```

You can do infilling by passing in --partial_seq and using PAD in place of the tokens you want to infill.
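If you are scripting this, here is a minimal sketch of building such a partial_seq programmatically and invoking the script; `make_partial_seq` is a hypothetical helper, and only the flags and checkpoint path are taken from the command above.

```python
# Hypothetical helper: mark the span to be generated with PAD tokens, then
# pass the resulting string to scripts/infill.py via --partial_seq.
import subprocess

def make_partial_seq(left: str, right: str, n_blanks: int) -> str:
    """Build a START ... END sequence with n_blanks PAD slots between the two contexts."""
    blanks = " ".join(["PAD"] * n_blanks)
    return f"START {left} {blanks} {right} END"

partial_seq = make_partial_seq(
    left="The Eagle is a",
    right="shop located in the city centre area",
    n_blanks=3,
)

subprocess.run([
    "python", "scripts/infill.py",
    "--model_path",
    "diff_e2e-tgt_pad_rand16_transformer_lr0.0001_0.0_2000_sqrt_Lsimple_h128_s2_d0.1_sd102_xstart/ema_0.9999_200000.pt",
    "--batch_size", "50",
    "--partial_seq", partial_seq,
    "--eval_task_", "infill",
], check=True)
```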

re2:

```
python scripts/infill.py --model_path {diff_e2e-tgt_block_rand16_transformer_lr0.0001_0.0_2000_sqrt_Lsimple_h128_s2_d0.1_sd102_xstart/ema_0.9999_200000.pt} --batch_size 50 --partial_seq "START" --eval_task_ length --tgt_len 10 --out_dir {your output_dir}
```

re the dialog problem: I think you don't need infilling to generate the next utterance. Infilling is particularly useful if you have both left and right context and want to generate something in the middle. You can simply use the unconditional generation part and control that process (similar to the classifier-guided control experiments in the paper, except that you condition on all prior utterances).
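For what it's worth, here is a conceptual sketch of the kind of classifier-guided reverse step being described, assuming the improved-diffusion style `p_mean_variance` interface bundled with this repo. The classifier over the dialog history, the guidance scale, and the tensor shapes are all assumptions, and the paper's actual control procedure differs in its details.

```python
import torch

def guided_denoise_step(diffusion, model, classifier, x_t, t, history, scale=1.0):
    """One reverse-diffusion step nudged toward a condition (e.g. the dialog history).

    classifier(x_t, t, history) is assumed to return log p(condition | x_t);
    its gradient w.r.t. x_t steers the otherwise unconditional denoising mean.
    """
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_prob = classifier(x_in, t, history).sum()
        grad = torch.autograd.grad(log_prob, x_in)[0]

    out = diffusion.p_mean_variance(model, x_t, t)        # unconditional step
    mean = out["mean"] + scale * out["variance"] * grad   # classifier guidance
    noise = torch.randn_like(x_t)
    nonzero = (t != 0).float().view(-1, *([1] * (x_t.dim() - 1)))
    return mean + nonzero * torch.exp(0.5 * out["log_variance"]) * noise
```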

radi-cho commented 2 years ago

@XiangLi1999 @ChorlingLau How can I run conditioning on previous sequences without infilling?

yulinchen99 commented 2 years ago

I think conditioning on previous sequences using "infilling" is also fine. It is just that your input changes from left_context [PAD] [PAD] ... [PAD] right_context to left_context [PAD] [PAD] ... [PAD]. Meanwhile, I feel there may be a need to fine-tune the trained diffusion model (or probably re-train one on a custom dataset?), since conversational text is different from E2E and ROCStories. Also, I think it is the number of PAD tokens that controls the length of the generated text. In their experiments they seem to use a max length of 10 for the infilling task.

https://github.com/XiangLi1999/Diffusion-LM/blob/759889d58ef38e2eed41a8c34db8032e072826f4/improved-diffusion/scripts/infill.py#L83
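To make that concrete, here is a small string-level sketch of turning a dialog history into a left-context input whose response length is fixed by the number of PAD tokens. It assumes the same START/END/PAD conventions as the commands above; the helper itself is hypothetical.

```python
def history_to_partial_seq(history: list[str], response_len: int = 10) -> str:
    """Join prior utterances as left context and append response_len PAD slots.

    Only the trailing PAD positions are generated, so response_len effectively
    bounds the length of the sampled utterance.
    """
    left_context = " ".join(history)
    blanks = " ".join(["PAD"] * response_len)
    return f"START {left_context} {blanks} END"

print(history_to_partial_seq(["how are you ?", "i am fine , thanks ."], response_len=10))
```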

Besides, I am quite intrigued by the fact that their model performs quite satisfactorily on the infilling task without any further fine-tuning: in the original training process the whole sequence is always noised, whereas in the infilling task the input is only partially noised at each step.
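For intuition, this is the usual inpainting-style trick: at every reverse step the known positions are overwritten with an appropriately noised copy of their clean embeddings, so only the PAD positions are truly denoised. Here is a generic sketch (not a literal excerpt from infill.py; the mask convention and the improved-diffusion style `q_sample`/`p_sample` calls are assumptions).

```python
import torch

def infill_step(diffusion, model, x_t, t, known_emb, known_mask):
    """One reverse step of inpainting-style infilling (generic sketch).

    known_emb:  clean embeddings of the fixed context tokens.
    known_mask: 1.0 where the token is given, 0.0 where it should be generated.
    """
    # Re-noise the known context to the current noise level t ...
    noised_known = diffusion.q_sample(known_emb, t)
    # ... and clamp it into the current sample before denoising.
    x_t = known_mask * noised_known + (1.0 - known_mask) * x_t
    # Ordinary reverse step over the whole sequence; only the masked-out
    # positions carry generated content forward.
    out = diffusion.p_sample(model, x_t, t)
    return out["sample"]
```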

radi-cho commented 2 years ago

@XiangLi1999 @cyl628 Can I specify left context when using the eval_task "length"?