gradio-app / gradio

Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
http://www.gradio.app
Apache License 2.0

Clear button not working on chatbot #885

Closed kingabzpro closed 2 years ago

kingabzpro commented 2 years ago

Describe the bug

On Spaces, the previous chat persists, and sometimes if you add too many chat logs it just breaks. I don't want my demo to break.

Reproduction

Just chat using https://huggingface.co/spaces/kingabzpro/Rick_and_Morty_Bot and try pressing the clear button.

Screenshot

(screenshot attached)

Logs

No response

System Info

Spaces default, Edge, Windows

Severity

critical

abidlabs commented 2 years ago

Hi @kingabzpro, thanks for creating this issue! We are going to take a look at it soon.

abidlabs commented 2 years ago

Hi @kingabzpro, as far as I can tell, this should be working now! Here's an example of it in action: https://huggingface.co/spaces/abidlabs/chatbot-minimal

kingabzpro commented 2 years ago

It is not working. I have even tried another browser (Chrome) and incognito mode. Even after refreshing, the chat stays there no matter what; the clear button doesn't work.

(screenshot attached)

kingabzpro commented 2 years ago

The link to your app, https://huggingface.co/spaces/abidlabs/chatbot-minimal, is not loading properly.

(screenshot attached)

If I press clear and type again, there is an error:

(screenshot attached)

abidlabs commented 2 years ago

Thanks, taking a look again

abidlabs commented 2 years ago

Hi @kingabzpro, you're right, something's up with my app. However, I just tested this with our pre-release gradio==2.9b7, and I can confirm that the issue is fixed.

You can test this by doing:

pip install gradio==2.9b7

And then running a chatbot app like:

import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

def predict(input, history=[]):
    # tokenize the new input sentence
    new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)

    # generate a response
    history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).tolist()

    # convert the tokens to text, then split the transcript into a list of (user, bot) tuples
    response = tokenizer.decode(history[0]).split("<|endoftext|>")
    response = [(response[i], response[i+1]) for i in range(0, len(response)-1, 2)]
    return response, history

gr.Interface(fn=predict,
             theme="default",
             css=".footer {display:none !important}",
             inputs=["text", "state"],
             outputs=["chatbot", "state"]).launch()
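
As a side note, the pairing step in predict can be sketched in isolation (plain Python, no model needed — the transcript text below is made up for illustration): DialoGPT separates turns with the <|endoftext|> token, so decoding the full history yields alternating user/bot utterances, which are then zipped into the (user, bot) tuples the chatbot output component expects.

```python
# Hypothetical decoded transcript; turns alternate user/bot and are
# separated by the <|endoftext|> token.
decoded = ("Hi there<|endoftext|>"
           "Hello! How can I help?<|endoftext|>"
           "Tell me a joke<|endoftext|>"
           "Why did the chicken cross the road?<|endoftext|>")

# Splitting on the separator leaves an empty final element (the string
# ends with the separator); the range bound len(turns) - 1 skips it.
turns = decoded.split("<|endoftext|>")

# Pair consecutive utterances into (user, bot) tuples.
pairs = [(turns[i], turns[i + 1]) for i in range(0, len(turns) - 1, 2)]
print(pairs)
```
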

abidlabs commented 2 years ago

Going to close this because I'm pretty confident that this should work, but if not, you know where to find us :)

Rogerspy commented 1 year ago

> Hi @kingabzpro, you're right, something's up with my app. However, I just tested this with our pre-release gradio==2.9b7, and I can confirm that the issue is fixed. [...]

It's not working. My gradio version is 3.28.0.