mckaywrigley / chatbot-ui

MIT License

Dialog Context gets stuck #12

Closed markusstrasser closed 1 year ago

markusstrasser commented 1 year ago

After chatting back and forth a few times, the dialog context seems to freeze, and the responses to a new query stay the same.

Seems there is an issue with the dialogue state? (GPT-3.5, default settings.)

mckaywrigley commented 1 year ago

If possible, can you provide a screenshot and your device? I'm having a tough time reproducing the bug. Thanks!

thomasleveil commented 1 year ago

Same thing here. I can see the vertical scroll bar updating as more text is included in the page, but the page does not automatically scroll down, and it is frozen (I cannot use the mouse wheel or arrow keys to scroll down).

Once the response has finished being added to the page, the freeze disappears and I can scroll down to see the newly added text.

Note that reproducing this requires a few interactions with the bot, so that the page fills its full height and you actually need to scroll down.

(using Brave browser)

mckaywrigley commented 1 year ago

@markusstrasser Is this your issue as well, or is it actually affecting responses? Trying to clarify the difference between these descriptions.

mckaywrigley commented 1 year ago

I changed scroll to "auto" from "smooth" which fixed the scroll issue (I think). @thomasleveil Let me know if that fixes the issue you had.

It sounds like @markusstrasser may have a different one so I'll keep this open until I confirm all issues are fixed!

markusstrasser commented 1 year ago

Sure. Screenshots attached. I'm on macOS Ventura, MBP M1.

My best guesses are that either:

a) the temperature is set to 0, which might push the model to give deterministic responses after a while, or
b) the model is not receiving enough context from the conversation history, since `charLimit` in the code clips the messages sent to the API.

I'll experiment a bit, but it's probably closer to b), since I would assume OpenAI would mention the temperature thing in their docs or disincentivize setting it to 0.

In the code:

```ts
const charLimit = 12000;
let charCount = 0;
let messagesToSend = [];

for (let i = 0; i < messages.length; i++) {
  const message = messages[i];
  if (charCount + message.content.length > charLimit) {
    break;
  }
  charCount += message.content.length;
  messagesToSend.push(message);
}
```

It seems that after a while you'd always send the first few messages to the API and break before the new messages are added. If the charLimit needs to be 12000, maybe loop over the list in reverse, so that older instead of newer messages get clipped?
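A minimal sketch of that reverse-loop suggestion (the `Message` shape and the `clipMessages` helper name are my own for illustration, not from the repo): walk the history from newest to oldest, keep as many recent messages as fit the budget, then restore chronological order.

```typescript
interface Message {
  role: string;
  content: string;
}

const charLimit = 12000;

function clipMessages(messages: Message[], limit: number = charLimit): Message[] {
  let charCount = 0;
  const messagesToSend: Message[] = [];

  // Iterate newest-to-oldest so older messages are clipped first.
  for (let i = messages.length - 1; i >= 0; i--) {
    const message = messages[i];
    if (charCount + message.content.length > limit) {
      break;
    }
    charCount += message.content.length;
    messagesToSend.unshift(message); // prepend to keep chronological order
  }

  return messagesToSend;
}
```

With this version, once the history exceeds the budget it is the oldest messages that drop out, so the API always sees the latest turns of the conversation.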

Also, do we know the default params that OpenAI uses for their GPT-3.5 Turbo ChatGPT web UI? I'd love to set them as defaults to compare behavior.

(Six screenshots attached, taken 2023-03-19 at 9:47 AM.)
mckaywrigley commented 1 year ago

Thanks so much for this! We fixed the bug that was incorrectly setting the context: the new messages were being removed instead of added. If you pull, you should see that fixed for you.

I'll be adding the ability to replace the default prompt with your own in the next 2 days, so you'll be able to play around with that.

Let me know if the bug fix we patched in fixes this.

markusstrasser commented 1 year ago

Thanks for the fix! By the way, is there a reason for the 4000 messageLimit or the 12000 charLimit? I'm trying to find settings that match the behavior at chat.openai.com as closely as possible, so I can use this client as my main UI to ChatGPT.

mckaywrigley commented 1 year ago

The API limit is 4K tokens total. It's about 1k tokens for the message and 2-3k for context, which stretches it out as much as possible. I'll have configuration options this week, so you'll be able to customize it.