Closed: italojs closed this issue 7 months ago
Took a quick skim - it looks like you're using OpenAI. You should use ChatOpenAI for chat agents and chat models.
Can you see if changing that helps?
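For reference, a minimal sketch of that swap, assuming the langchain 0.0.x-era JS import layout (the temperature value is illustrative, not taken from the reporter's code). This is a configuration sketch, not a runnable repro, since it needs an OpenAI API key:

```typescript
// Sketch: use the chat-completions model class (ChatOpenAI) instead of the
// legacy completions one (OpenAI, imported from "langchain/llms/openai").
// Import paths follow the 0.0.x langchainjs layout and may differ in newer releases.
import { ChatOpenAI } from "langchain/chat_models/openai";

// Before: const model = new OpenAI({ temperature: 0 });
// After: a chat model, which matches the prompt format chat agents expect.
const model = new ChatOpenAI({ temperature: 0 });
```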
Well, for the demo it worked. I spent time trying to reproduce the error both in the demo and in my real solution, but over the last few days that behavior didn't happen until today.
Look, here we have a log from my original code. I removed some sensitive content, but we can still see the problem.
[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAI] Entering LLM run with input: {
"prompts": [
"'<API Documentation>\nAPI url:"
]
}
[llm/start] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAI] Entering LLM run with input: {
"prompts": [
"<api documentation> \nAPI url:"
]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAI] [1.54s] Exiting LLM run with output: {
"generations": [
[
{
"text": " <correct expected output>'",
"generationInfo": {
"finishReason": "stop",
"logprobs": null
}
}
]
],
"llmOutput": {
"tokenUsage": {
"completionTokens": 53,
"promptTokens": 2346,
"totalTokens": 2399
}
}
}
[llm/end] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAI] [1.54s] Exiting LLM run with output: {
"generations": [
[
{
"text": " <correct expected output>",
"generationInfo": {
"finishReason": "stop",
"logprobs": null
}
}
]
],
"llmOutput": {
"tokenUsage": {
"completionTokens": 53,
"promptTokens": 2346,
"totalTokens": 2399
}
}
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain] [1.54s] Exiting Chain run with output: {
"text": " <correct expected output>"
}
[chain/end] [1:chain:APIChain > 2:chain:LLMChain] [1.54s] Exiting Chain run with output: {
"text": " <correct expected output>'"
}
[chain/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] Entering Chain run with input: {
"question": "what is the maximum value for cpu usage for my app",
"api_docs": "\ <api documentation> \"\n",
"api_url": " <correct expected output>",
"api_response": "name,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n"
}
[chain/start] [1:chain:APIChain > 4:chain:LLMChain] Entering Chain run with input: {
"question": "what is the maximum value for cpu usage for my-app?",
"api_docs": "\<api documentation>"\n",
"api_url": " <expected output> '",
"api_response": "name,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n"
}
[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAI] Entering LLM run with input: {
"prompts": [
"<api documentation> ?\nAPI url: <expected output>'\n\nHere is the response from the API:\n\nname,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n\n\nSummarize this response to answer the original question.\n\nSummary:"
]
}
[llm/start] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAI] Entering LLM run with input: {
"prompts": [
"<expected output> \n\nQuestion:what is the maximum value for cpu usage for my app?\nAPI url: < expected output > '\n\nHere is the response from the API:\n\nname,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n\n\nSummarize this response to answer the original question.\n\nSummary:"
]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAI] [1.23s] Exiting LLM run with output: {
"generations": [
[
{
"text": " The maximum value for cpu usage for your app is 237.88139913100468%.",
"generationInfo": {
"finishReason": "stop",
"logprobs": null
}
}
]
],
"llmOutput": {
"tokenUsage": {
"completionTokens": 21,
"promptTokens": 2457,
"totalTokens": 2478
}
}
}
[llm/end] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAI] [1.23s] Exiting LLM run with output: {
"generations": [
[
{
"text": " The maximum value for cpu usage your app is 237.88139913100468%.",
"generationInfo": {
"finishReason": "stop",
"logprobs": null
}
}
]
],
"llmOutput": {
"tokenUsage": {
"completionTokens": 21,
"promptTokens": 2457,
"totalTokens": 2478
}
}
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] [1.23s] Exiting Chain run with output: {
"text": " The maximum value for cpu usage for your app is 237.88139913100468%."
}
[chain/end] [1:chain:APIChain > 4:chain:LLMChain] [1.23s] Exiting Chain run with output: {
"text": " The maximum value for cpu usage for your app is 237.88139913100468%."
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain] [2.83s] Exiting Chain run with output: {
"output": " The maximum value for cpu usage for your app is 237.88139913100468%."
}
[chain/end] [1:chain:APIChain] [2.83s] Exiting Chain run with output: {
"output": " The maximum value for cpu usage for your app is 237.88139913100468%."
}
[tool/end] [1:chain:AgentExecutor > 4:tool:ChainTool] [2.83s] Exiting Tool run with output: "The maximum value for cpu usage for your app is 237.88139913100468%."
[chain/start] [1:chain:AgentExecutor > 10:chain:LLMChain] Entering Chain run with input: {
"input": "System: Remember your rules before reply.\n Human: what is the maximum value for cpu usage for my-app?, give me the full response",
"chat_history": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"HumanMessage"
],
"kwargs": {
"content": "System: Remember your rules before reply.\n Human: what us my the max of my cpu usage?, give me the full response",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"AIMessage"
],
"kwargs": {
"content": "Sorry, I am not able to answer this question.",
"additional_kwargs": {}
}
}
],
"agent_scratchpad": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"AIMessage"
],
"kwargs": {
"content": "{\n \"action\": \"Metrics query\",\n \"action_input\": \"what is the maximum value for cpu usage for my app?\"\n}",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"HumanMessage"
],
"kwargs": {
"content": "TOOL RESPONSE:\n---------------------\n The maximum value for cpu usage for your app is 237.88139913100468%.\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.",
"additional_kwargs": {}
}
}
],
"stop": [
"Observation:"
]
}
[llm/start] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"SystemMessage"
],
"kwargs": {
"content": "\n<prefix-prompt>\"text\": \"{\n \"action\": \"Final Answer\",\n \"action_input\": \"Sorry, I can't talk about it.\"\n}\"\n",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"HumanMessage"
],
"kwargs": {
"content": "System: Remember your rules before reply.\n Human: what us my the max of my cpu usage?, give me the full response",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"AIMessage"
],
"kwargs": {
"content": "Sorry, I am not able to answer this question.",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"HumanMessage"
],
"kwargs": {
"content": "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\nTracing query: Tracing query - useful for when you need to ask questions about tracing, traces and any other http operation.\n Input: MUST be A question containing the keywords \"tracing\", \"traces\" and \"http\"\nNode.js Specialist: Nodejs performance specialist - useful for when you need to ask questions about NodeJS and performance.\n Input: MUST be a question\nMetrics query: Metrics query - useful for when you need to ask questions about metrics of the aplication like rousource consume, delay, uptime, etc..\n Input: MUST be the original question\n\nRESPONSE FORMAT INSTRUCTIONS\n----------------------------\n\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\n\n**Option 1:**\nUse this if you want the human to use a tool.\nMarkdown code snippet formatted in the following schema:\n\n```json\n{\n \"action\": string, // The action to take. Must be one of [Tracing query, Node.js Specialist, Metrics query]\n \"action_input\": string // The input to the action. May be a stringified object.\n}\n```\n\n**Option #2:**\nUse this if you want to respond directly and conversationally to the human. 
Markdown code snippet formatted in the following schema:\n\n```json\n{\n \"action\": \"Final Answer\",\n \"action_input\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\n}\n```\n\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \"```json\" and end with \"```\")!\n\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\nSystem: Remember your rules before reply.\n Human: what is the maximum value for cpu usage for my app?, give me the full response",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"AIMessage"
],
"kwargs": {
"content": "{\n \"action\": \"Metrics query\",\n \"action_input\": \"what is the maximum value for cpu usage for my app?\"\n}",
"additional_kwargs": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"HumanMessage"
],
"kwargs": {
"content": "TOOL RESPONSE:\n---------------------\n The maximum value for cpu usage your app is 237.88139913100468%.\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.",
"additional_kwargs": {}
}
}
]
]
}
[llm/end] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI] [1.52s] Exiting LLM run with output: {
"generations": [
[
{
"text": "{\n \"action\": \"Final Answer\",\n \"action_input\": \"Sorry, I am not able to answer this question.\"\n}",
"generationInfo": {
"prompt": 0,
"completion": 0
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"AIMessageChunk"
],
"kwargs": {
"content": "{\n \"action\": \"Final Answer\",\n \"action_input\": \"Sorry, I am not able to answer this question.\"\n}",
"additional_kwargs": {}
}
}
}
]
]
}
[chain/end] [1:chain:AgentExecutor > 10:chain:LLMChain] [1.52s] Exiting Chain run with output: {
"text": "{\n \"action\": \"Final Answer\",\n \"action_input\": \"Sorry, I am not able to answer this question.\"\n}"
}
[chain/end] [1:chain:AgentExecutor] [6.16s] Exiting Chain run with output: {
"output": "Sorry, I am not able to answer this question."
}
Look at [tool/end] [1:chain:AgentExecutor > 4:tool:ChainTool] [2.83s]:
there we have the chain's output, and it is the expected output. But the next log entry,
[llm/start] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI],
for some reason brings back an "I don't know" response. What happened to the chain's output?
@jacoblee93 I assume it could be hard to identify the problem from the log alone. Let me know if you know of any tools to debug LangChain, or if you need more information. This behavior is a little random, and it's hard to reproduce.
I am also facing this error. Following this thread
same
@stewones @evanbrociner could you share your code? I can't share mine because it belongs to the company where I work; it's private.
I have the same issue, following.
Edit: switching between agentType: "zero-shot-react-description" and "chat-zero-shot-react-description" seems to make a difference, but it certainly does not solve the issue...
I had the same issue, but after switching to agentType: "chat-conversational-react-description", it works for now.
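For anyone trying that workaround: the switch is just the agentType option on the executor. A hedged sketch, assuming the 0.0.x langchainjs API (the tool list and model settings are placeholders, so treat this as a configuration sketch rather than a runnable repro):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new ChatOpenAI({ temperature: 0 });
const tools = []; // your ChainTool / APIChain-backed tools go here

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  // "chat-conversational-react-description" is the agent type reported to
  // work in this thread; "zero-shot-react-description" is the one that did not.
  agentType: "chat-conversational-react-description",
});
```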
I don't know about other folks, but I found that for local LLMs you need to add an "AI" marker to the tool response. Here is an example (Python): agent.agent.template_tool_response = '''Use the combination of the User's last query and this {observation} to respond to the user.<|AI|>\n'''
Hi, @italojs,
I'm helping the langchainjs team manage their backlog and am marking this issue as stale. From what I understand, you reported a bug related to the agent ignoring the correct output from the APIChain and providing a different response. You and other users have attempted to debug the issue, and some potential solutions, such as switching the agentType or modifying the tool response, have been discussed.
Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and cooperation.
I'm opening an issue instead of asking a question on Stack Overflow because I suspect this is a bug.
Behavior summary:
When I call an agent that uses an APIChain during its execution, the agent ignores the chain's answer.
Reproducible code example:
Note: for compliance reasons this isn't my real code.
In the code above I have an APIChain that makes a simple request to
https://www.klarna.com/us/shopping/public/openai/v0/api-docs/
which returns simple documentation in JSON format. Below we have the output:
Now look, here we can see the chain's output, and it's correct:
Buuuuuut I don't know why, the agent just ignores it and my last response is:
It doesn't make sense.
Could someone explain what is happening here, please?
If it's a bug, I'm able to discuss a solution and open a PR.
I tried to debug it, but it's a little complex to debug agents and tools internally.
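One way to get traces like the log above without digging into internals, sketched under the assumption of the same 0.0.x-era langchainjs API: pass verbose: true when building the executor, which makes it print the [chain/start], [llm/start], [tool/end], ... events as the run progresses. Tools and model here are placeholders, so this is a configuration sketch:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new ChatOpenAI({ temperature: 0 });
const tools = []; // tools omitted for brevity

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "chat-conversational-react-description",
  verbose: true, // emit [chain/start], [llm/start], [tool/end], ... trace events
});
```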