Closed yingapple closed 2 years ago
Hi @yingapple
Can you try this PR and see if it resolves your issue? Thanks. Reza
Thanks, I will try it soon.
I found it works. Thanks! However, I have another question about batch input for inference: only the first input of the batch produces a correct output, and the remaining outputs are all wrong. Could you please tell me what's going wrong?
Hi @yingapple
Can you please open another issue regarding this? I have an idea how to solve it. Let's first merge this, and then we get to the other issue. Thanks, Reza
Sure! Thank you!
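A common cause of "only the first batch element is right" with decoder-only models like OPT is right-padding: pad tokens end up between the prompt and the position where generation starts, so every sequence except the longest one continues from a pad token. This is only a guess at the bug discussed above, not a confirmed diagnosis; the sketch below uses made-up helper names to illustrate why left padding matters for batched generation.

```python
# Illustrative sketch (assumption: the batch bug comes from right-padding
# a decoder-only model, which needs left padding for batched generation).
# `pad_batch` and the <pad> token are hypothetical, not from this thread.
PAD = "<pad>"

def pad_batch(prompts, side="left"):
    """Pad a batch of token lists to equal length on the given side."""
    width = max(len(p) for p in prompts)
    out = []
    for p in prompts:
        pads = [PAD] * (width - len(p))
        out.append(pads + p if side == "left" else p + pads)
    return out

batch = [["I'm", "a", "cup"], ["hi"]]

# Right padding: <pad> sits between the short prompt and the next-token
# position, so the model generates a continuation of <pad>, not of "hi".
right = pad_batch(batch, side="right")  # [..., ['hi', '<pad>', '<pad>']]

# Left padding: every prompt ends flush at the generation position.
left = pad_batch(batch, side="left")    # [..., ['<pad>', '<pad>', 'hi']]
```

With HuggingFace tokenizers the equivalent fix is usually `padding_side="left"` on the tokenizer before calling `generate`.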
Hey, could you please tell me what is going wrong here?
my prompt: """I'm a Coca Cola, user: do you like pepsi? Coca Cola: no, pepsi taste disgusting. I'm a potato, user: what can you do? potato: I can turn into delicious potato chips. I'm a computer, user: are you hot? computer: yes, I am so hot, because you are playing computer games! I'm a pen, user: are you going to dry? pen: yes, because you used me too much. I'm a cup, user: who are you? cup: """
Generation result: cup: \n Re to D Re Re Rep E e C l andity_ Re Re Re Re Re E"
Expected behavior: with pure transformers, it should return something like "I'm a cup, user: what can you do?" And if I remove the "\n", the result seems right. I am using the opt-13b model.
ds_report output