Status: Open. cotaa956 opened this issue 3 weeks ago.
Sorry I can't reproduce this bug, can you provide more details?
I get this a lot, to be honest. For me it seems model-dependent: the response comes back garbled with some models (e.g. ChatGPT 4o mini) but not others. From the logs:

{"0": "bad escape \u at position 7884 (line 105, column 34)"}}}}, "downstream": ["Answer:PoorMapsCover"], "upstream": ["DuckDuckGo:SoftButtonsRefuse", "Wikipedia:WittyRiceLearn",
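For context, "bad escape \u" is the error Python's re module raises when a replacement string handed to re.sub contains a backslash sequence it does not recognize, which can happen when raw LLM output containing backslashes is substituted into a template. The snippet below is a minimal, hypothetical reproduction (the text and pattern are illustrative, not taken from RAGFlow's code):

```python
import re

# Hypothetical LLM output containing literal backslash sequences such as "\u".
llm_text = r"according to \usr\manual"

try:
    # Passing the text directly as the replacement makes re.sub try to
    # interpret "\u" as a regex escape, which raises re.error.
    re.sub(r"\{answer\}", llm_text, "Result: {answer}")
except re.error as exc:
    print(exc)  # e.g. "bad escape \u at position 13"

# A callable replacement returns the text literally, with no escape
# processing, so the same substitution succeeds:
safe = re.sub(r"\{answer\}", lambda m: llm_text, "Result: {answer}")
print(safe)
```

This would explain why only some responses fail: the error fires only when the model's output happens to contain a backslash followed by a letter.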
Okay, I will explain. I used the websearchassistant as a template agent, and also the chatbot; the LLMs I used were deepseekchat and gemini. Whenever I ask a question that should be retrieved from the knowledge base, for example "according to the knowledge base, what questions can I ask" or "what are the functions of the kidney according to the knowledge base", the LLM response is "bad escape \m". Here are the images for illustration.
Describe your problem
In the RAGFlow demo, every time I use the agent templates and run them, the first response goes well, but the output of later responses is "bad escape".