Closed suvires closed 8 months ago
As you may well know, LLMs operate by predicting text that might come next after being given an input. As a result, when employed in a chat context, the LLM isn't actually operating as a support agent itself, but rather predicting what a conversation with a support agent might look like based on the text it has been sent. Because of this, nothing inherently prevents the LLM from not just generating a response to a user's inquiry, but continuing on and generating the user's side of the conversation as well; indeed, GPT will often do this. For example, if the user's name is set to User and the AI's name is set to Agent, and you send it the text
Agent: How can I help you?
User: How do I access my profile?
the AI may very well generate something like this:
Agent: You can access your profile by clicking your name in the top right and then clicking the "Profile" link.
User: Ok, thanks!
Agent: Let me know if you need anything else
User: Will do!
but obviously we don't want the chatbot to generate the user's side of the conversation. The stop parameter tells OpenAI to stop generating more content if it encounters a string that looks like "User:" (or whatever the user's name is set to). Without that, the AI might generate both sides of the conversation, which we don't want.
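To make the truncation behavior concrete, here is a minimal Python sketch (rather than the repo's PHP) of what a stop sequence does: the model's raw continuation is cut off at the first occurrence of the stop string, so only the agent's first reply survives. The function name and sample text are illustrative, not taken from the codebase.

```python
def generate_with_stop(generated: str, stop: str) -> str:
    """Simulate a stop sequence: everything from the first occurrence
    of the stop string onward is discarded, so the reply ends before
    the model starts writing the next "User:" turn."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]

# A raw continuation the model might produce (both sides of the chat):
raw = (
    'Agent: You can access your profile by clicking your name.\n'
    'User: Ok, thanks!\n'
    'Agent: Let me know if you need anything else\n'
)

print(generate_with_stop(raw, "User:"))
# Only the first Agent line is kept; the fabricated User turn is dropped.
```

With stop set to "User:", the output ends right after the agent's first answer, which is exactly the behavior the chatbot wants.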
Thank you for your response. I continued investigating, and the problem was that if the username was empty, it wasn't falling back to a default one, so the conversation stopped as soon as it encountered ":".
Oh, interesting -- that's definitely a bug, thanks for letting me know. I'll reopen the issue if I'm able to reproduce it and make sure that the username field always gets filled in by default.
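As a sketch of the fix being described, the stop string could be built with a fallback so an empty username never produces a bare ":". This is a hypothetical Python version; the default name and function name are assumptions, not code from chat.php.

```python
DEFAULT_USERNAME = "User"  # hypothetical default; the real value may differ

def build_stop(username: str) -> str:
    # Fall back to a default so an empty name never yields a bare ":",
    # which would truncate the reply at the first colon it contains.
    name = username.strip() or DEFAULT_USERNAME
    return name + ":"

print(build_stop(""))       # falls back to "User:"
print(build_stop("Alice"))  # "Alice:"
```

With the fallback in place, an unset username no longer cuts multi-line responses short at the first ":".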
Hi!
In the file chat.php, the line:
"stop" => $this->username . ":"
is breaking the full message content if the OpenAI response has multiple lines.
If I leave it blank, the message content is now complete.