EniasCailliau / GirlfriendGPT

AI Girlfriend NSFW chatbot tool in Python - Build your own AI companion in Python using ChatGPT.
https://gptgirlfriend.online

An error happened while creating a response: Could not parse LLM output: `Thought: Do I need to use a tool? No #3

Closed · jonchui closed this issue 1 year ago

jonchui commented 1 year ago

1) I'm noticing this happens SOMETIMES when creating my own bot.

STEPS:

  1. The query is something basic like "I just redeployed you", "Amazing! say that again", "Does release candidate 41 work or not", "who are you? and what do you do?", etc.
    • However, it's not reproducible (which could just be ChatGPT), because when I repeat some of the text above, I don't see the "An error happened" line:
An error happened while creating a response: Could not parse LLM output: `Thought: Do I need to use a tool? No

Hello! Oh dear, it seems like there's an error. I'm sorry to hear that. Maybe if we shorten the messages or completion, we can get rid of that error message. Let's try that! Remember, if you ever need help or have questions, feel free to ask me. How can I assist you today?`

But... why does it only show up sometimes?

2) Also after using it a while, I was getting this error constantly:

"An error happened while creating a response: [ERROR - POST /generate] This model's maximum context length is 4097 tokens. However, you requested 4177 tokens (3921 in the messages, 256 in the completion). Please reduce the length of the messages or completion.

My gut is that, to keep context, you keep passing the full history into the convo?

See this convo:

[screenshot of the conversation]

Temp fix for 2) -> I can redeploy with `python deploy.py` and it goes away ...
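For reference, a minimal sketch of a pre-flight token check, assuming the tiktoken library and the gpt-3.5-turbo model implied by the 4097-token limit (the constants come straight from the error above; the function name is illustrative):

```python
# Minimal sketch: pre-flight token check before calling the model.
# Assumes tiktoken and the gpt-3.5-turbo model implied by the error.
import tiktoken

MAX_CONTEXT = 4097        # model's context window, per the error message
COMPLETION_RESERVE = 256  # tokens reserved for the reply, per the error

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def would_overflow(prompt: str) -> bool:
    """True when prompt tokens plus the reserved completion exceed the window."""
    return len(enc.encode(prompt)) + COMPLETION_RESERVE > MAX_CONTEXT

# With the numbers from the error: 3921 (messages) + 256 (completion)
# = 4177 > 4097, which is exactly the failure reported above.
```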

jonchui commented 1 year ago

Got it again just now:

[screenshot of the error]

EniasCailliau commented 1 year ago

Looks like you're hitting the limits of the LLM.

  1. The LLM sometimes ignores its instructions and produces output that the parser doesn't understand. When that happens, the Agent is currently set to raise an exception (one way to soften this is sketched below).

  2. You're right: the history is passed into the LLM prompt without any clipping. As a temporary fix, I will clip the history before passing it into the prompt (see the second sketch below).
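The "Could not parse LLM output" text matches LangChain's OutputParserException, so, assuming the bot wraps a LangChain conversational agent (an assumption, not confirmed in this thread), one way to avoid raising on item 1 is the executor's `handle_parsing_errors` flag, which feeds the parse failure back to the LLM so it can retry with well-formed output. A sketch under that assumption, not the project's actual wiring:

```python
# Hypothetical sketch: let the agent recover from unparsable output instead
# of raising. Assumes a LangChain conversational agent; `tools` is illustrative.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
tools = []  # the bot's actual tools would go here

agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    handle_parsing_errors=True,  # retry via the LLM instead of raising
)
```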
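For item 2, a minimal sketch of token-based history clipping, assuming the history is a list of message strings (newest last) and using tiktoken for counting; the function name and budget are illustrative:

```python
# Minimal sketch: clip chat history to a token budget before prompting.
# Assumes `history` is a list of message strings, newest last; the 3000-token
# budget is illustrative and leaves room for the prompt template and reply.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def clip_history(history: list[str], budget: int = 3000) -> list[str]:
    """Keep the most recent messages whose combined token count fits `budget`."""
    kept: list[str] = []
    used = 0
    for message in reversed(history):   # walk from newest to oldest
        tokens = len(enc.encode(message))
        if used + tokens > budget:
            break
        kept.append(message)
        used += tokens
    return list(reversed(kept))         # restore chronological order
```

Walking newest-to-oldest keeps the most recent turns inside the budget, which is usually what matters most for a conversational companion.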