Open maxyousif15 opened 6 months ago
@maxyousif15 : it looks like the LLM is not stopping correctly. @trebedea can you try to reproduce?
Hi @maxyousif15 !
I have not been able to reproduce this bug. I ran with the configs you provided and the streaming example demo, which I assume you are also using, and I always received a normal bot message. I ran it 50 times to make sure the issue isn't something intermittent, e.g. caused by the high sampling temperature (I see you use temperature=1 for bot message generation).
Can you share additional details so we can replicate it? Is the problem still there?
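For context, the temperature used for bot message generation is set in config.yml, and a lower value usually makes run-away generations less frequent. A minimal sketch, assuming the standard models entry with a parameters block (the engine and model names here are illustrative placeholders, not taken from your config):

```yaml
# Illustrative config.yml fragment -- engine/model names are placeholders.
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
    parameters:
      temperature: 0.2   # lower than 1.0 to reduce run-away generations
```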
I have noticed that the output of the `LLMRails` object occasionally includes content that appears to be a predicted next user question, or chain-of-thought output. Below is a screenshot highlighting the issue. Is there any guidance on this? I expect the LLM to respond to the question without including all the other additional content. It's extremely hard to debug what is going on.
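As a stopgap while debugging, one option is to post-process the reply and truncate it at the first leaked marker (a predicted `user` turn, etc.). This is a hypothetical sketch, not part of the NeMo Guardrails API; the marker strings are assumptions and should be adjusted to whatever actually appears in your `LLMRails` output:

```python
# Hypothetical post-processing: if the model fails to stop and leaks a
# predicted next user turn or chain-of-thought text, cut the reply at the
# earliest leaked marker. Marker strings below are assumptions.
LEAKED_MARKERS = ["\nuser ", "\nUser:", "\nbot ", "# Next question"]

def truncate_at_leak(text: str, markers=LEAKED_MARKERS) -> str:
    """Return text cut off at the earliest leaked marker, if any."""
    cut = len(text)
    for marker in markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

reply = 'The weather today is sunny.\nuser "What about tomorrow?"'
print(truncate_at_leak(reply))  # -> The weather today is sunny.
```

This only hides the symptom; the leaked text still counts against token usage, so fixing the stop behavior itself is still the real goal.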
I've also included my `config.yml` and an example `rails.co` file below.
Any insights would be much appreciated.