Closed jaysonnph closed 10 months ago
🤖
Hello,
Thank you for reaching out and providing detailed information about the issue you're facing.
Based on the context provided, it seems like the issue might be related to the formatting of the JSON blob returned by the LangChain framework. The framework is designed to always respond with a valid JSON blob of a single action. If it's returning human responses, it could be due to incorrect formatting or misuse of the 'action' and 'action_input' keys in the JSON blob.
The correct format to use is:

```json
{
  "action": "Final Answer",
  "action_input": "Final response to human"
}
```
If the 'action' key is not set to "Final Answer", or if the 'action_input' key is not providing the final response, the framework might interpret it as a human response.
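As a quick sanity check, you can validate an agent's raw reply against this schema before acting on it. The helper below is a minimal sketch using only the standard library; the function name and error handling are illustrative, not part of LangChain's API.

```python
import json


def parse_action_blob(text: str):
    """Parse a structured-chat agent reply and return (action, action_input).

    Raises ValueError if the reply is not a valid single-action JSON blob.
    """
    try:
        blob = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Reply is not valid JSON: {exc}") from exc
    if not isinstance(blob, dict) or "action" not in blob or "action_input" not in blob:
        raise ValueError("Reply must be a JSON object with 'action' and 'action_input' keys")
    return blob["action"], blob["action_input"]


# A well-formed final answer parses cleanly:
action, answer = parse_action_blob(
    '{"action": "Final Answer", "action_input": "Final response to human"}'
)
print(action)  # Final Answer
```

Anything that fails this check (for example, free-form text with fabricated human turns) is a sign the model is not following the expected output format.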
In the context shared, the 'ChatAnthropic' class in the 'anthropic.py' file is responsible for handling chat interactions with Anthropic's large language models. It wraps generated responses in the 'AIMessage' class. If the 'ChatAnthropic' class is not correctly generating the 'AIMessage', or if the 'AIMessage' content is not correctly formatted, it could lead to unexpected results.
Please ensure that the JSON blob is correctly formatted and the 'action' and 'action_input' keys are used appropriately. Also, check the implementation of the 'ChatAnthropic' class and the 'AIMessage' class to ensure they are working as expected.
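A common cause of "human responses" appearing in the output is the model continuing the conversation past its own turn and fabricating the next human message. Below is a minimal, framework-agnostic sketch of a post-processing guard; the stop markers are assumptions about a typical chat prompt format, not LangChain internals.

```python
def truncate_at_human_turn(text: str, stop_markers=("\nHuman:", "\nUser:")) -> str:
    """Cut off generated text at the first fabricated human/user turn.

    The model sometimes keeps generating past its own turn; everything from
    the first stop marker onward is hallucinated dialogue and is dropped.
    """
    cut = len(text)
    for marker in stop_markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()


raw = "The capital of France is Paris.\nHuman: And Germany?\nAI: Berlin."
print(truncate_at_human_turn(raw))  # The capital of France is Paris.
```

In practice the cleaner fix is usually to stop generation at the model level rather than trimming afterwards; most LangChain chat models accept stop sequences, so the fake turn is never emitted in the first place.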
You can refer to the following files for more details:
Also, I noticed that you're using LangChain version 0.0.271. The latest version is 0.0.272. It might be worth updating to the latest version to see if the issue persists.
I hope this helps. If you have any further questions or if the issue persists, please feel free to provide more details so I can assist you better.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@hwchase17 @agola11
Hi, @jaysonnph! I'm Dosu, and I'm helping the LangChain team manage our backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, the issue you reported concerns the Chatbots example in the LangChain library: the responses returned by the example include both language model responses and some human responses, which is not the expected behavior. @hwchase17 and @agola11 have been addressing the issue, and I provided a response suggesting that it might be related to incorrect formatting or misuse of the JSON blob returned by the LangChain framework, and recommended checking the implementation of the ChatAnthropic class and the AIMessage class.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository, and please don't hesitate to reach out if you have any further questions or concerns!
System Info
LangChain version: 0.0.271
Platform: Ubuntu 20.04
Device: Nvidia T4
Python version: 3.9.17
Who can help?
@hwchase17 @agola11
Information
Related Components
Reproduction
Expected behavior
I am following the Chatbots example here. It's not working as expected: the responses returned contain not just a single LLM response but also some human responses. What is happening there?