Hi @xhlulu. This is a known issue: transformer models tend to copy from the context. My suggestion would be to try either sampling with a higher temperature, or adding a repetition penalty to generation so that repeating the context is explicitly penalized.
Thank you! Is that possible with huggingface's transformers, or would I need to use the original model (in this repo)?
They are the same model, so either is fine. You can use huggingface's decoding script for GPT-2 and change it a bit to adapt it to DialoGPT. There you should be able to tweak the temperature or add a repetition penalty.
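Roughly, something like this with huggingface's generate API should do it (the parameter values below are only illustrative starting points, not tuned settings, and input_ids / model / tokenizer are whatever you already have set up):

# sketch: decode with sampling, a higher temperature and a repetition penalty
output_ids = model.generate(
    input_ids,
    max_length=1000,
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.8,          # higher temperature -> more random generations
    repetition_penalty=1.3,   # penalize tokens that already appear in the context
    pad_token_id=tokenizer.eos_token_id,
)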
Awesome, thanks for the advice! I'll try out this decoding script and close this issue if no problems arise.
@xhlulu were you able to work around the problem? I'm experiencing exactly the same issue.
I tinkered a bit with the temperature and repetition_penalty parameters and got decent results. This is my code:
for step in range(50):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id, temperature=0.6, repetition_penalty=1.3)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
Thanks for sharing! I'll try this with my own bot
I just tried your method, as well as the top-p/top-k method from the huggingface tutorial. Here are the results.
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt').to('cuda')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
Result:
>> User:Today, I bought some ice cream.
DialoGPT: I'm going to buy some ice cream.
>> User:Ok, but hwy?
DialoGPT: I'm going to buy some ice cream.
>> User:Fine. You already said that
DialoGPT: I'm going to buy some ice cream.
>> User:Ok, goodbye!
DialoGPT: I'm going to buy some ice cream.
>> User:Stop
DialoGPT: I'm going to buy some ice cream.
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt').to('cuda')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id, temperature=0.6, repetition_penalty=1.3)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
Result:
>> User:Today, I bought some ice cream.
DialoGPT: I'm going to buy a boat.
>> User:Wow that was random! but happy for you
DialoGPT: It's not like he has anything else to do!
>> User:What has what?
DialoGPT: A boat?
>> User:You said he?
DialoGPT: No, I didn't say him. He is the one who said it.
>> User:Ok fine. Whatever you say
DialoGPT: He's just saying that because of his name.
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt').to('cuda')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        do_sample=True,
        max_length=1000,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id
    )

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
Result:
>> User:Today, I bought some ice cream.
DialoGPT: Me too! :D
>> User:Nice! What brand did you buy?
DialoGPT: Strawberry and Vanilla
>> User:That's not a brand!
DialoGPT: Yup :P
>> User:Ok fine, anyway. What else did you do?
DialoGPT: I ate candy bars
>> User:cool! Were they good?
DialoGPT: They were. It was kinda like a snickerdoodle from my younger years.
Hi @xhlulu
I am also encountering the same issue of repeated utterances. I tried the solution mentioned above (with different combinations of parameters), but the generated utterances are still the same. I am using self-generated data to train the model.
I have attached a few screenshots for your reference.
@saleemsum I recommend looking at the loss on the Reddit dataset to check whether there's catastrophic forgetting. If the model originally output good responses but now struggles, then something probably went wrong during training.
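As a rough sketch of what that check could look like (the fine-tuned checkpoint path, the "medium" checkpoint name, and the example exchanges below are all placeholders, not anything official):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
original = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
finetuned = AutoModelForCausalLM.from_pretrained("path/to/your/finetuned/checkpoint")  # placeholder path

# a few held-out Reddit-style exchanges, each turn terminated with the eos token
dialogues = [
    "Does money buy happiness?" + tokenizer.eos_token + "Depends how much money you spend on it." + tokenizer.eos_token,
    "Can you recommend a good book?" + tokenizer.eos_token + "Anything by Terry Pratchett is a safe bet." + tokenizer.eos_token,
]

def mean_lm_loss(model):
    # average language-modeling loss of the model over the reference exchanges
    model.eval()
    losses = []
    with torch.no_grad():
        for text in dialogues:
            ids = tokenizer.encode(text, return_tensors='pt')
            losses.append(model(ids, labels=ids).loss.item())
    return sum(losses) / len(losses)

print("original DialoGPT loss:", mean_lm_loss(original))
print("fine-tuned model loss :", mean_lm_loss(finetuned))
# a much higher loss for the fine-tuned model on this kind of data would suggest
# catastrophic forgetting of the original conversational ability

If the gap is large, the fine-tuning run (data formatting, learning rate, number of steps) is the first place to look.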
I am trying to train a transformer-based model, but my model always generates the same word, as shown below, and my code is here. How can I fix it?
I'm also trying to use the DialoGPT model; I fine-tuned it with a dataset related to my task. Here is an example of what I ran into.
User: im happy
Psychobot: im happy�'m feeling really down and hopeless. What can I do to feel better? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do? What can I do???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m?'m????'m?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????'m not?'m not?'m not?'m not?'m?'m?'m?'m?'m?'m'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm'm
How can I solve this issue? Should I repeat the fine-tuning process, or use another dataset? This is the link to the dataset I used to fine-tune DialoGPT: https://huggingface.co/datasets/jkhedri/psychology-dataset (although I dropped the last column because it generates bad answers).
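Would generation settings along these lines be expected to help, or do I need to redo the fine-tuning? (The values below are just guesses at reasonable defaults, using the standard generate arguments.)

# sketch of generation settings aimed at cutting off the repeated "What can I do?" loops
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=200,                       # keep the response short instead of letting it run on
    do_sample=True,
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=3,               # forbid repeating any 3-gram within the output
    repetition_penalty=1.3,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,  # stop as soon as an end-of-turn token is produced
)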
I tried running the large model (in a colab notebook) using the approach described in the model card from the huggingface library:
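That is, roughly the standard snippet from the DialoGPT-large model card:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))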
Unfortunately, the output quickly repeats the same sentence over and over again. In examples 1 and 2, it repeats the same sentence from the beginning. In the third case, the model starts fine, but as the conversation progresses it starts repeating the same thing (or parts of it).
Is this intended behavior?
Example 1
Example 2
Example 3