Hey @GlanceQwQ, great to see you around again! Hope you're ready to dive back into some quirky Langchain-Chatchat adventures 🚀
Based on the information you've provided and the similar issues I've found, it seems like the problem might be related to how the model is being loaded in Langchain-Chatchat, specifically when using the chains.llmchain_with_history code.
In the related issue chatglm3-6b问题, another user reported the model talking to itself; the solution in that case was to adjust the way the model was being loaded and used.
It's possible that the issue you're experiencing is related to how the ChatGLM3-6B model merged with LoRA is loaded and used in Langchain-Chatchat. You mentioned that the issue does not occur with the original ChatGLM3-6B model, which suggests the problem is specific to the merged model.
One possible solution is to load and use the merged model in exactly the same way as the original ChatGLM3-6B model. If the issue persists, it may be necessary to look into how the merged model is structured and how it interacts with Langchain-Chatchat.
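If it helps, here is a minimal sketch (not from the Langchain-Chatchat codebase) of merging a LoRA adapter into ChatGLM3-6B with the peft library and saving the result so it can be loaded exactly like the original checkpoint; all paths below are placeholders you would replace with your own:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_path = "THUDM/chatglm3-6b"             # original base model
adapter_path = "/path/to/your/lora"         # placeholder for your LoRA adapter
merged_path = "/path/to/chatglm3-6b-merged" # placeholder for the merged output

# Load the base model and tokenizer the same way the original ChatGLM3-6B is loaded.
base = AutoModel.from_pretrained(base_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

# Attach the LoRA adapter, fold its weights into the base model, and save the result.
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()
model.save_pretrained(merged_path)
tokenizer.save_pretrained(merged_path)
```

Pointing your Langchain-Chatchat model configuration at the merged directory should then behave like a plain local ChatGLM3-6B checkpoint; if the self-chat behavior only appears with the merged weights, the merge step or the tokenizer/config files copied alongside it are the first places to look.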
The issue you're encountering with Langchain-Chatchat, where the system unexpectedly appends "<|user|>" to the answer and then generates additional responses instead of stopping after a single response, is likely due to the behavior of the output parser and how it handles the termination of responses.
To resolve this issue, you should adjust the output parsing logic to ensure that it properly identifies the end of a response and prevents further response generation. This can be achieved by modifying the _stop property of the StructuredGLM3ChatAgent class to include the "<|user|>" token as a stop signal, so that the system recognizes "<|user|>" as an indication to stop generating further responses.
Here's how you can modify the _stop property:
@property
def _stop(self) -> List[str]:
    # Treat the ChatGLM3 role tokens as stop signals so generation ends after the answer.
    return ["<|observation|>", "<|user|>"]
By adding "<|user|>" to the _stop list, you're explicitly telling the system that encountering this token should end the response generation process. This should prevent the system from appending "<|user|>" to the answer and generating additional responses beyond what is expected.
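If the stop list alone does not change the behavior, a simple way to confirm what is happening (purely illustrative, not code from Langchain-Chatchat) is to truncate the generated text at the first role token yourself:

```python
STOP_TOKENS = ["<|observation|>", "<|user|>"]

def truncate_at_stop(text: str, stop_tokens=STOP_TOKENS) -> str:
    """Return the text up to the earliest stop token, if one appears."""
    cut = len(text)
    for token in stop_tokens:
        idx = text.find(token)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

# Illustrative output that "keeps talking to itself":
raw = "The capital of France is Paris.<|user|> And Germany?<|assistant|> Berlin."
print(truncate_at_stop(raw))  # -> "The capital of France is Paris."
```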
Please note that this is a potential solution based on the information available and the similar issues I've found. If the issue persists after trying this solution, please provide more details about your environment (such as the exact version of Langchain-Chatchat you're using, your operating system and version, your Python version, and any other relevant environment information), and I'll be glad to assist you further.
For more information, you can refer to the source code of the StructuredGLM3ChatAgent class in the server/agent/custom_agent/ChatGLM3Agent.py file.
Take a look at your dependencies and your GLM version; this may be a version problem.
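For a quick check of the versions involved, assuming a standard pip-based install, something like this prints the relevant package versions (adjust the package list to your environment):

```python
import importlib.metadata as md

# Packages commonly involved in a ChatGLM3 + LoRA setup; adjust as needed.
for pkg in ["transformers", "peft", "torch", "langchain"]:
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```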
This issue has been marked as stale because it has had no activity for more than 30 days.
Problem Description
After asking a question, the model automatically appends <|user|>…… to the answer and then continues to answer the question it generated itself.
Steps to Reproduce
Expected Result
The response should end once my question has been answered; why does it keep generating?
Actual Result
Environment Information
Additional Information