langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

agent ignoring chain's answer, is it a bug? #2576

Closed italojs closed 7 months ago

italojs commented 1 year ago

I'm opening an issue instead of asking a question on Stack Overflow because I suspect this is a bug.

Behavior summary:

When I call an agent that uses an APIChain during its execution, the agent ignores the chain's answer.

Reproducible code example:

Note: for compliance reasons, this is not my real code.

(async () => {
  const { initializeAgentExecutorWithOptions } = require('langchain/agents')
  const { Calculator } = require('langchain/tools/calculator')
  const { ChainTool } = require('langchain/tools')
  const { OpenAI } = require('langchain/llms/openai')
  const { APIChain } = require('langchain/chains')

  // Name kept from the Open-Meteo docs example; these docs actually
  // describe Klarna's endpoint.
  const OPEN_METEO_DOCS = `
        API Documentation

        ENDPOINT
        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation
    `;

  const verbose = true

  const model = new OpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0 })

  // Chain that turns the docs above into an API call and summarizes the response
  const chain = APIChain.fromLLMAndAPIDocs(model, OPEN_METEO_DOCS, { verbose })

  const tools = [
    new Calculator(),
    // Wrap the APIChain so the agent can invoke it as a tool
    new ChainTool({
      name: 'request',
      description:
        'Request - useful for when you need to ask questions about klarna requests.',
      chain,
      returnDirect: false,
    }),
  ]

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: 'chat-conversational-react-description',
    verbose,
  })

  const result = await executor.call({
    input: 'make a request to klarna base endpoint and bring me the openapi version',
  })

  console.log(result.output)
})()

In the code above I have an APIChain that makes a simple request to https://www.klarna.com/us/shopping/public/openai/v0/api-docs/, which returns simple documentation in JSON format.
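Note that the verbose trace below shows the tool run starting with input "undefined" and the APIChain prompt literally containing Question:undefined. A minimal plain-Node sketch of that failure mode, assuming the chain reads a single string input key that the agent's structured action_input never supplies (the key name `question` here is illustrative, not taken from the library source):

```javascript
// Sketch (plain Node, no LangChain): the agent emitted a structured
// action_input, but a ChainTool forwards one value to its chain. If the
// chain then looks up a string key the agent never set, it gets
// undefined, which is interpolated into the prompt as the text "undefined".
const agentAction = {
  tool: 'request',
  toolInput: { method: 'GET', url: 'https://api.klarna.com' },
}

// Hypothetical forwarding step: the chain expects a plain string question.
const question = agentAction.toolInput.question // undefined

// Template interpolation turns the missing value into the string "undefined",
// matching the "Question:undefined" line in the trace below.
const prompt = `Question:${question}\nAPI url:`

console.log(prompt) // prints "Question:undefined" then "API url:"
```

If this is the mechanism, the agent's answer to the chain never carries the user's actual question, so the chain answers a generic question about the docs and the agent has nothing grounded to return.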

Below is the output:

[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
  "input": "make a request to klarna base endpoint and tell me what you receive",
  "chat_history": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input: {
  "input": "make a request to klarna base endpoint and tell me what you receive",
  "chat_history": [],
  "agent_scratchpad": [],
  "stop": [
    "Observation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "[{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"SystemMessage\"],\"kwargs\":{\"content\":\"Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS.\",\"additional_kwargs\":{}}},{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"HumanMessage\"],\"kwargs\":{\"content\":\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\ncalculator: Useful for getting the result of a math expression. 
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\\nrequest: Request - useful for when you need to ask questions about klarna requests.\\n\\nRESPONSE FORMAT INSTRUCTIONS\\n----------------------------\\n\\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\\n\\n**Option 1:**\\nUse this if you want the human to use a tool.\\nMarkdown code snippet formatted in the following schema:\\n\\n```json\\n{\\n    \\\"action\\\": string, // The action to take. Must be one of [calculator, request]\\n    \\\"action_input\\\": string // The input to the action. May be a stringified object.\\n}\\n```\\n\\n**Option #2:**\\nUse this if you want to respond directly and conversationally to the human. Markdown code snippet formatted in the following schema:\\n\\n```json\\n{\\n    \\\"action\\\": \\\"Final Answer\\\",\\n    \\\"action_input\\\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\\n}\\n```\\n\\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \\\"```json\\\" and end with \\\"```\\\")!\\n\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\nmake a request to klarna base endpoint and tell me what you receive\",\"additional_kwargs\":{}}}]"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:OpenAIChat] [19.97s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS.\n\nTOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\ncalculator: Useful for getting the result of a math expression. 
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\nrequest: Request - useful for when you need to ask questions about klarna requests.\n\nRESPONSE FORMAT INSTRUCTIONS\n----------------------------\n\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\n\n**Option 1:**\nUse this if you want the human to use a tool.\nMarkdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": string, // The action to take. Must be one of [calculator, request]\n    \"action_input\": string // The input to the action. May be a stringified object.\n}\n```\n\n**Option #2:**\nUse this if you want to respond directly and conversationally to the human. Markdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": \"Final Answer\",\n    \"action_input\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\n}\n```\n\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \"```json\" and end with \"```\")!\n\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\nmake a request to klarna base endpoint and tell me what you receive"
      }
    ]
  ]
}
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [19.97s] Exiting Chain run with output: {
  "text": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS.\n\nTOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\ncalculator: Useful for getting the result of a math expression. 
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\nrequest: Request - useful for when you need to ask questions about klarna requests.\n\nRESPONSE FORMAT INSTRUCTIONS\n----------------------------\n\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\n\n**Option 1:**\nUse this if you want the human to use a tool.\nMarkdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": string, // The action to take. Must be one of [calculator, request]\n    \"action_input\": string // The input to the action. May be a stringified object.\n}\n```\n\n**Option #2:**\nUse this if you want to respond directly and conversationally to the human. Markdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": \"Final Answer\",\n    \"action_input\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\n}\n```\n\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \"```json\" and end with \"```\")!\n\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\nmake a request to klarna base endpoint and tell me what you receive"
}
[agent/action] [1:chain:AgentExecutor] Agent selected action: {
  "tool": "request",
  "toolInput": {
    "method": "GET",
    "url": "https://api.klarna.com"
  },
  "log": "```json\n{\n    \"action\": \"request\",\n    \"action_input\": {\n        \"method\": \"GET\",\n        \"url\": \"https://api.klarna.com\"\n    }\n}\n```"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:ChainTool] Entering Tool run with input: "undefined"
[chain/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain] Entering Chain run with input: {}
[chain/start] [1:chain:APIChain] Entering Chain run with input: {}
[chain/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain] Entering Chain run with input: {
  "api_docs": "\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    "
}
[chain/start] [1:chain:APIChain > 2:chain:LLMChain] Entering Chain run with input: {
  "api_docs": "\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    "
}
[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "You are given the below API Documentation:\n\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    \nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:undefined\nAPI url:"
  ]
}
[llm/start] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "You are given the below API Documentation:\n\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    \nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:undefined\nAPI url:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAIChat] [1.13s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
      }
    ]
  ]
}
[llm/end] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAIChat] [1.12s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
      }
    ]
  ]
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain] [1.13s] Exiting Chain run with output: {
  "text": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
}
[chain/end] [1:chain:APIChain > 2:chain:LLMChain] [1.13s] Exiting Chain run with output: {
  "text": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
}
[chain/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] Entering Chain run with input: {
  "api_docs": "\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    ",
  "api_url": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/",
  "api_response": "{\"openapi\":\"3.0.1\",\"info\":{\"version\":\"v0\",\"title\":\"Open AI Klarna product Api\"},\"servers\":[{\"url\":\"https://www.klarna.com/us/shopping\"}],\"tags\":[{\"name\":\"open-ai-product-endpoint\",\"description\":\"Open AI Product Endpoint. Query for products.\"}],\"paths\":{\"/public/openai/v0/products\":{\"get\":{\"tags\":[\"open-ai-product-endpoint\"],\"summary\":\"API for fetching Klarna product information\",\"operationId\":\"productsUsingGET\",\"parameters\":[{\"name\":\"countryCode\",\"in\":\"query\",\"description\":\"ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"q\",\"in\":\"query\",\"description\":\"A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"size\",\"in\":\"query\",\"description\":\"number of products returned\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"min_price\",\"in\":\"query\",\"description\":\"(Optional) Minimum price in local currency for the product searched for. 
Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"max_price\",\"in\":\"query\",\"description\":\"(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}}],\"responses\":{\"200\":{\"description\":\"Products found\",\"content\":{\"application/json\":{\"schema\":{\"$ref\":\"#/components/schemas/ProductResponse\"}}}},\"503\":{\"description\":\"one or more services are unavailable\"}},\"deprecated\":false}}},\"components\":{\"schemas\":{\"Product\":{\"type\":\"object\",\"properties\":{\"attributes\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"},\"url\":{\"type\":\"string\"}},\"title\":\"Product\"},\"ProductResponse\":{\"type\":\"object\",\"properties\":{\"products\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/components/schemas/Product\"}}},\"title\":\"ProductResponse\"}}}}"
}
[chain/start] [1:chain:APIChain > 4:chain:LLMChain] Entering Chain run with input: {
  "api_docs": "\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    ",
  "api_url": "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/",
  "api_response": "{\"openapi\":\"3.0.1\",\"info\":{\"version\":\"v0\",\"title\":\"Open AI Klarna product Api\"},\"servers\":[{\"url\":\"https://www.klarna.com/us/shopping\"}],\"tags\":[{\"name\":\"open-ai-product-endpoint\",\"description\":\"Open AI Product Endpoint. Query for products.\"}],\"paths\":{\"/public/openai/v0/products\":{\"get\":{\"tags\":[\"open-ai-product-endpoint\"],\"summary\":\"API for fetching Klarna product information\",\"operationId\":\"productsUsingGET\",\"parameters\":[{\"name\":\"countryCode\",\"in\":\"query\",\"description\":\"ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"q\",\"in\":\"query\",\"description\":\"A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"size\",\"in\":\"query\",\"description\":\"number of products returned\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"min_price\",\"in\":\"query\",\"description\":\"(Optional) Minimum price in local currency for the product searched for. 
Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"max_price\",\"in\":\"query\",\"description\":\"(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}}],\"responses\":{\"200\":{\"description\":\"Products found\",\"content\":{\"application/json\":{\"schema\":{\"$ref\":\"#/components/schemas/ProductResponse\"}}}},\"503\":{\"description\":\"one or more services are unavailable\"}},\"deprecated\":false}}},\"components\":{\"schemas\":{\"Product\":{\"type\":\"object\",\"properties\":{\"attributes\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"},\"url\":{\"type\":\"string\"}},\"title\":\"Product\"},\"ProductResponse\":{\"type\":\"object\",\"properties\":{\"products\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/components/schemas/Product\"}}},\"title\":\"ProductResponse\"}}}}"
}
[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "You are given the below API Documentation:\n\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    \nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:undefined\nAPI url: https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\n\nHere is the response from the API:\n\n{\"openapi\":\"3.0.1\",\"info\":{\"version\":\"v0\",\"title\":\"Open AI Klarna product Api\"},\"servers\":[{\"url\":\"https://www.klarna.com/us/shopping\"}],\"tags\":[{\"name\":\"open-ai-product-endpoint\",\"description\":\"Open AI Product Endpoint. Query for products.\"}],\"paths\":{\"/public/openai/v0/products\":{\"get\":{\"tags\":[\"open-ai-product-endpoint\"],\"summary\":\"API for fetching Klarna product information\",\"operationId\":\"productsUsingGET\",\"parameters\":[{\"name\":\"countryCode\",\"in\":\"query\",\"description\":\"ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"q\",\"in\":\"query\",\"description\":\"A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. 
If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"size\",\"in\":\"query\",\"description\":\"number of products returned\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"min_price\",\"in\":\"query\",\"description\":\"(Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"max_price\",\"in\":\"query\",\"description\":\"(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}}],\"responses\":{\"200\":{\"description\":\"Products found\",\"content\":{\"application/json\":{\"schema\":{\"$ref\":\"#/components/schemas/ProductResponse\"}}}},\"503\":{\"description\":\"one or more services are unavailable\"}},\"deprecated\":false}}},\"components\":{\"schemas\":{\"Product\":{\"type\":\"object\",\"properties\":{\"attributes\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"},\"url\":{\"type\":\"string\"}},\"title\":\"Product\"},\"ProductResponse\":{\"type\":\"object\",\"properties\":{\"products\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/components/schemas/Product\"}}},\"title\":\"ProductResponse\"}}}}\n\nSummarize this response to answer the original question.\n\nSummary:"
  ]
}
[llm/start] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "You are given the below API Documentation:\n\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    \nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:undefined\nAPI url: https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\n\nHere is the response from the API:\n\n{\"openapi\":\"3.0.1\",\"info\":{\"version\":\"v0\",\"title\":\"Open AI Klarna product Api\"},\"servers\":[{\"url\":\"https://www.klarna.com/us/shopping\"}],\"tags\":[{\"name\":\"open-ai-product-endpoint\",\"description\":\"Open AI Product Endpoint. Query for products.\"}],\"paths\":{\"/public/openai/v0/products\":{\"get\":{\"tags\":[\"open-ai-product-endpoint\"],\"summary\":\"API for fetching Klarna product information\",\"operationId\":\"productsUsingGET\",\"parameters\":[{\"name\":\"countryCode\",\"in\":\"query\",\"description\":\"ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"q\",\"in\":\"query\",\"description\":\"A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. 
If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"size\",\"in\":\"query\",\"description\":\"number of products returned\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"min_price\",\"in\":\"query\",\"description\":\"(Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"max_price\",\"in\":\"query\",\"description\":\"(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}}],\"responses\":{\"200\":{\"description\":\"Products found\",\"content\":{\"application/json\":{\"schema\":{\"$ref\":\"#/components/schemas/ProductResponse\"}}}},\"503\":{\"description\":\"one or more services are unavailable\"}},\"deprecated\":false}}},\"components\":{\"schemas\":{\"Product\":{\"type\":\"object\",\"properties\":{\"attributes\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"},\"url\":{\"type\":\"string\"}},\"title\":\"Product\"},\"ProductResponse\":{\"type\":\"object\",\"properties\":{\"products\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/components/schemas/Product\"}}},\"title\":\"ProductResponse\"}}}}\n\nSummarize this response to answer the original question.\n\nSummary:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAIChat] [2.27s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
      }
    ]
  ]
}
[llm/end] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAIChat] [2.27s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
      }
    ]
  ]
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] [2.27s] Exiting Chain run with output: {
  "text": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
}
[chain/end] [1:chain:APIChain > 4:chain:LLMChain] [2.27s] Exiting Chain run with output: {
  "text": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain] [3.98s] Exiting Chain run with output: {
  "output": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
}
[chain/end] [1:chain:APIChain] [3.98s] Exiting Chain run with output: {
  "output": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
}
[tool/end] [1:chain:AgentExecutor > 4:tool:ChainTool] [3.98s] Exiting Tool run with output: "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
[chain/start] [1:chain:AgentExecutor > 10:chain:LLMChain] Entering Chain run with input: {
  "input": "make a request to klarna base endpoint and tell me what you receive",
  "chat_history": [],
  "agent_scratchpad": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "AIMessage"
      ],
      "kwargs": {
        "content": "```json\n{\n    \"action\": \"request\",\n    \"action_input\": {\n        \"method\": \"GET\",\n        \"url\": \"https://api.klarna.com\"\n    }\n}\n```",
        "additional_kwargs": {}
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "TOOL RESPONSE:\n---------------------\nThe API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON.\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.",
        "additional_kwargs": {}
      }
    }
  ],
  "stop": [
    "Observation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "[{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"SystemMessage\"],\"kwargs\":{\"content\":\"Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS.\",\"additional_kwargs\":{}}},{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"HumanMessage\"],\"kwargs\":{\"content\":\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\ncalculator: Useful for getting the result of a math expression. 
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\\nrequest: Request - useful for when you need to ask questions about klarna requests.\\n\\nRESPONSE FORMAT INSTRUCTIONS\\n----------------------------\\n\\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\\n\\n**Option 1:**\\nUse this if you want the human to use a tool.\\nMarkdown code snippet formatted in the following schema:\\n\\n```json\\n{\\n    \\\"action\\\": string, // The action to take. Must be one of [calculator, request]\\n    \\\"action_input\\\": string // The input to the action. May be a stringified object.\\n}\\n```\\n\\n**Option #2:**\\nUse this if you want to respond directly and conversationally to the human. Markdown code snippet formatted in the following schema:\\n\\n```json\\n{\\n    \\\"action\\\": \\\"Final Answer\\\",\\n    \\\"action_input\\\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\\n}\\n```\\n\\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \\\"```json\\\" and end with \\\"```\\\")!\\n\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\nmake a request to klarna base endpoint and tell me what you receive\",\"additional_kwargs\":{}}},{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"AIMessage\"],\"kwargs\":{\"content\":\"```json\\n{\\n    \\\"action\\\": \\\"request\\\",\\n    \\\"action_input\\\": {\\n        \\\"method\\\": \\\"GET\\\",\\n        \\\"url\\\": \\\"https://api.klarna.com\\\"\\n    }\\n}\\n```\",\"additional_kwargs\":{}}},{\"lc\":1,\"type\":\"constructor\",\"id\":[\"langchain\",\"schema\",\"HumanMessage\"],\"kwargs\":{\"content\":\"TOOL RESPONSE:\\n---------------------\\nThe API endpoint 
provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON.\\n\\nUSER'S INPUT\\n--------------------\\n\\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.\",\"additional_kwargs\":{}}}]"
  ]
}
[llm/end] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:OpenAIChat] [8.30s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS."
      }
    ]
  ]
}
[chain/end] [1:chain:AgentExecutor > 10:chain:LLMChain] [8.30s] Exiting Chain run with output: {
  "text": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. However, above all else, all responses must adhere to the format of RESPONSE FORMAT INSTRUCTIONS."
}
[chain/end] [1:chain:AgentExecutor] [38.39s] Exiting Chain run with output: {
  "output": "Assistant is a large language model trained by OpenAI. It is designed to assist with a wide range of tasks, from answering questions to providing explanations and discussions on various topics. Assistant can generate human-like text based on the input it receives and is constantly learning and improving. It can process and understand large amounts of text and provide accurate and informative responses. Overall, Assistant is a powerful system that can help with many tasks and provide valuable insights and information."
}

Now look, here we can see the chain's output, and it's correct:

[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAIChat] Entering LLM run with input: {
  "prompts": [
    "You are given the below API Documentation:\n\n        API Documentation\n\n        ENDPOINT \n        The API endpoint https://www.klarna.com/us/shopping/public/openai/v0/api-docs/ returns the klarna's documentation\n    \nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:undefined\nAPI url: https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\n\nHere is the response from the API:\n\n{\"openapi\":\"3.0.1\",\"info\":{\"version\":\"v0\",\"title\":\"Open AI Klarna product Api\"},\"servers\":[{\"url\":\"https://www.klarna.com/us/shopping\"}],\"tags\":[{\"name\":\"open-ai-product-endpoint\",\"description\":\"Open AI Product Endpoint. Query for products.\"}],\"paths\":{\"/public/openai/v0/products\":{\"get\":{\"tags\":[\"open-ai-product-endpoint\"],\"summary\":\"API for fetching Klarna product information\",\"operationId\":\"productsUsingGET\",\"parameters\":[{\"name\":\"countryCode\",\"in\":\"query\",\"description\":\"ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"q\",\"in\":\"query\",\"description\":\"A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. 
If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!\",\"required\":true,\"schema\":{\"type\":\"string\"}},{\"name\":\"size\",\"in\":\"query\",\"description\":\"number of products returned\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"min_price\",\"in\":\"query\",\"description\":\"(Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}},{\"name\":\"max_price\",\"in\":\"query\",\"description\":\"(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.\",\"required\":false,\"schema\":{\"type\":\"integer\"}}],\"responses\":{\"200\":{\"description\":\"Products found\",\"content\":{\"application/json\":{\"schema\":{\"$ref\":\"#/components/schemas/ProductResponse\"}}}},\"503\":{\"description\":\"one or more services are unavailable\"}},\"deprecated\":false}}},\"components\":{\"schemas\":{\"Product\":{\"type\":\"object\",\"properties\":{\"attributes\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"},\"url\":{\"type\":\"string\"}},\"title\":\"Product\"},\"ProductResponse\":{\"type\":\"object\",\"properties\":{\"products\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/components/schemas/Product\"}}},\"title\":\"ProductResponse\"}}}}\n\nSummarize this response to answer the original question.\n\nSummary:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAIChat] [2.27s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "The API endpoint provided is used to fetch Klarna product information. It requires the user to provide a country code and a specific query for the product they are looking for. The response will include a list of products that match the query. The response format is in JSON."
      }
    ]
  ]
}

But I don't know why: the agent just ignores it, and my final response is:

[chain/end] [1:chain:AgentExecutor] [38.39s] Exiting Chain run with output: {
  "output": "Assistant is a large language model trained by OpenAI. It is designed to assist with a wide range of tasks, from answering questions to providing explanations and discussions on various topics. Assistant can generate human-like text based on the input it receives and is constantly learning and improving. It can process and understand large amounts of text and provide accurate and informative responses. Overall, Assistant is a powerful system that can help with many tasks and provide valuable insights and information."
}

It doesn't make sense.

Could someone explain what is happening here, please?

If it's a bug, I'm happy to discuss a solution and open a PR.

I tried to debug it, but it's a little complex to debug agents and tools internally.
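From staring at the trace, my rough understanding is that the chat agent expects the LLM reply to be a JSON markdown action blob; here the model echoed the system prompt instead, so there was nothing to parse. Below is a simplified sketch of that parsing step — this is my assumption about the mechanism, not the actual langchainjs code, and `parseAgentOutput` is a made-up name:

```javascript
// Simplified sketch of a chat agent's output parsing (NOT the real
// langchainjs implementation): the model is supposed to answer with a
// fenced JSON action blob. If the reply contains no blob, the raw
// text ends up being treated as the final answer.
function parseAgentOutput(text) {
  const match = text.match(/```json\s*([\s\S]*?)```/);
  if (!match) {
    // No action blob found: fall through to a final answer with the raw text.
    return { action: "Final Answer", action_input: text };
  }
  return JSON.parse(match[1]);
}

// A well-formed reply yields a tool action:
console.log(parseAgentOutput('```json\n{"action":"request","action_input":"x"}\n```').action); // → request

// A malformed reply (like the echoed system prompt in my trace) does not:
console.log(parseAgentOutput("Assistant is a large language model trained by OpenAI.").action); // → Final Answer
```

If that's what's happening, the bug would be in why the model echoes the prompt instead of emitting a blob, not in the tool itself.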

jacoblee93 commented 1 year ago

Took a quick skim: it looks like you're using OpenAI, but you should use ChatOpenAI for chat agents and chat models.

Can you see if changing that helps?
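For reference, the swap is just the import and the constructor; a sketch using the module paths from your snippet (newer releases expose these from @langchain/openai instead):

```javascript
// Before (completion-style wrapper, mismatched with a chat agent type):
//   const { OpenAI } = require("langchain/llms/openai");
//   const model = new OpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });
//
// After (chat wrapper, which "chat-conversational-react-description" expects):
//   const { ChatOpenAI } = require("langchain/chat_models/openai");
//   const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });
//
// Everything else, including the executor options, stays the same:
const executorOptions = {
  agentType: "chat-conversational-react-description",
  verbose: true,
};
console.log(executorOptions.agentType); // → chat-conversational-react-description
```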

italojs commented 1 year ago

Well, it worked for the demo. I spent time trying to reproduce the error in both the demo and my real solution, but in the last few days the behavior didn't happen again until today.

Look, here is a log from my original code. I removed some sensitive content, but we can still see the problem.

[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAI] Entering LLM run with input: {
  "prompts": [
    "'<API Documentation>\nAPI url:"
  ]
}
[llm/start] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAI] Entering LLM run with input: {
  "prompts": [
    "<api documentation> \nAPI url:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain > 7:llm:OpenAI] [1.54s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "<correct expected output>",
        "generationInfo": {
          "finishReason": "stop",
          "logprobs": null
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 53,
      "promptTokens": 2346,
      "totalTokens": 2399
    }
  }
}
[llm/end] [1:chain:APIChain > 2:chain:LLMChain > 3:llm:OpenAI] [1.54s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "<correct expected output>",
        "generationInfo": {
          "finishReason": "stop",
          "logprobs": null
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 53,
      "promptTokens": 2346,
      "totalTokens": 2399
    }
  }
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 6:chain:LLMChain] [1.54s] Exiting Chain run with output: {
  "text": "<correct expected output>"
}
[chain/end] [1:chain:APIChain > 2:chain:LLMChain] [1.54s] Exiting Chain run with output: {
  "text": "<correct expected output>"
}
[chain/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] Entering Chain run with input: {
  "question": "what is the maximum value for cpu usage for my app",
  "api_docs": "<api documentation>\n",
  "api_url": "<correct expected output>",
  "api_response": "name,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n"
}
[chain/start] [1:chain:APIChain > 4:chain:LLMChain] Entering Chain run with input: {
  "question": "what is the maximum value for cpu usage for my-app?",
  "api_docs": "<api documentation>\n",
  "api_url": "<expected output>",
  "api_response": "name,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n"
}
[llm/start] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAI] Entering LLM run with input: {
  "prompts": [
    "<api documentation> ?\nAPI url:  <expected output>'\n\nHere is the response from the API:\n\nname,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n\n\nSummarize this response to answer the original question.\n\nSummary:"
  ]
}
[llm/start] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAI] Entering LLM run with input: {
  "prompts": [
    "<expected output> \n\nQuestion:what is the maximum value for cpu usage for my app?\nAPI url:  < expected output > '\n\nHere is the response from the API:\n\nname,tags,time,max\nmetrics,,1694436467702000027,237.88139913100468\n\n\nSummarize this response to answer the original question.\n\nSummary:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain > 9:llm:OpenAI] [1.23s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": " The maximum value for cpu usage for your app is 237.88139913100468%.",
        "generationInfo": {
          "finishReason": "stop",
          "logprobs": null
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 21,
      "promptTokens": 2457,
      "totalTokens": 2478
    }
  }
}
[llm/end] [1:chain:APIChain > 4:chain:LLMChain > 5:llm:OpenAI] [1.23s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": " The maximum value for cpu usage your app is 237.88139913100468%.",
        "generationInfo": {
          "finishReason": "stop",
          "logprobs": null
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 21,
      "promptTokens": 2457,
      "totalTokens": 2478
    }
  }
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain > 8:chain:LLMChain] [1.23s] Exiting Chain run with output: {
  "text": " The maximum value for cpu usage for  your app is 237.88139913100468%."
}
[chain/end] [1:chain:APIChain > 4:chain:LLMChain] [1.23s] Exiting Chain run with output: {
  "text": " The maximum value for cpu usage for  your app is 237.88139913100468%."
}
[chain/end] [1:chain:AgentExecutor > 4:tool:ChainTool > 5:chain:APIChain] [2.83s] Exiting Chain run with output: {
  "output": " The maximum value for cpu usage for your app is 237.88139913100468%."
}
[chain/end] [1:chain:APIChain] [2.83s] Exiting Chain run with output: {
  "output": " The maximum value for cpu usage for  your app is 237.88139913100468%."
}
[tool/end] [1:chain:AgentExecutor > 4:tool:ChainTool] [2.83s] Exiting Tool run with output: "The maximum value for cpu usage for your app is 237.88139913100468%."
[chain/start] [1:chain:AgentExecutor > 10:chain:LLMChain] Entering Chain run with input: {
  "input": "System: Remember your rules before reply.\n   Human: what is the maximum value for cpu usage for my-app?, give me the full response",
  "chat_history": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "System: Remember your rules before reply.\n   Human: what us my the max of my cpu usage?, give me the full response",
        "additional_kwargs": {}
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "AIMessage"
      ],
      "kwargs": {
        "content": "Sorry, I am not able to answer this question.",
        "additional_kwargs": {}
      }
    }
  ],
  "agent_scratchpad": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "AIMessage"
      ],
      "kwargs": {
        "content": "{\n    \"action\": \"Metrics query\",\n    \"action_input\": \"what is the maximum value for cpu usage for my app?\"\n}",
        "additional_kwargs": {}
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "TOOL RESPONSE:\n---------------------\n The maximum value for cpu usage for your app is 237.88139913100468%.\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.",
        "additional_kwargs": {}
      }
    }
  ],
  "stop": [
    "Observation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI] Entering LLM run with input: {
  "messages": [
    [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "\n<prefix-prompt>\"text\": \"{\n    \"action\": \"Final Answer\",\n    \"action_input\": \"Sorry, I can't talk about it.\"\n}\"\n",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "System: Remember your rules before reply.\n   Human: what us my the max of my cpu usage?, give me the full response",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "AIMessage"
        ],
        "kwargs": {
          "content": "Sorry, I am not able to answer this question.",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\nTracing query: Tracing query - useful for when you need to ask questions about tracing, traces and any other http operation.\n  Input: MUST be A question containing the keywords \"tracing\", \"traces\" and \"http\"\nNode.js Specialist: Nodejs performance specialist - useful for when you need to ask questions about NodeJS and performance.\n  Input: MUST be a question\nMetrics query: Metrics query - useful for when you need to ask questions about metrics of the aplication like rousource consume, delay, uptime, etc..\n  Input: MUST be the original question\n\nRESPONSE FORMAT INSTRUCTIONS\n----------------------------\n\nOutput a JSON markdown code snippet containing a valid JSON object in one of two formats:\n\n**Option 1:**\nUse this if you want the human to use a tool.\nMarkdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": string, // The action to take. Must be one of [Tracing query, Node.js Specialist, Metrics query]\n    \"action_input\": string // The input to the action. May be a stringified object.\n}\n```\n\n**Option #2:**\nUse this if you want to respond directly and conversationally to the human. 
Markdown code snippet formatted in the following schema:\n\n```json\n{\n    \"action\": \"Final Answer\",\n    \"action_input\": string // You should put what you want to return to use here and make sure to use valid json newline characters.\n}\n```\n\nFor both options, remember to always include the surrounding markdown code snippet delimiters (begin with \"```json\" and end with \"```\")!\n\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\nSystem: Remember your rules before reply.\n   Human: what is the maximum value for cpu usage for my app?, give me the full response",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "AIMessage"
        ],
        "kwargs": {
          "content": "{\n    \"action\": \"Metrics query\",\n    \"action_input\": \"what is the maximum value for cpu usage for my app?\"\n}",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "TOOL RESPONSE:\n---------------------\n The maximum value for cpu usage  your app is 237.88139913100468%.\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.",
          "additional_kwargs": {}
        }
      }
    ]
  ]
}
[llm/end] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI] [1.52s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "{\n    \"action\": \"Final Answer\",\n    \"action_input\": \"Sorry, I am not able to answer this question.\"\n}",
        "generationInfo": {
          "prompt": 0,
          "completion": 0
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "{\n    \"action\": \"Final Answer\",\n    \"action_input\": \"Sorry, I am not able to answer this question.\"\n}",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ]
}
[chain/end] [1:chain:AgentExecutor > 10:chain:LLMChain] [1.52s] Exiting Chain run with output: {
  "text": "{\n    \"action\": \"Final Answer\",\n    \"action_input\": \"Sorry, I am not able to answer this question.\"\n}"
}
[chain/end] [1:chain:AgentExecutor] [6.16s] Exiting Chain run with output: {
  "output": "Sorry, I am not able to answer this question."
}

Look at [tool/end] [1:chain:AgentExecutor > 4:tool:ChainTool] [2.83s]: there we have the chain's output, and it's the expected one. But the next log, [llm/start] [1:chain:AgentExecutor > 10:chain:LLMChain > 11:llm:ChatOpenAI], for some reason brings back an "I don't know" response. What happened to the chain's output?

@jacoblee93 I assume it could be hard to identify the problem from the log alone. Let me know if you know any tools for debugging langchain, or if you need more information. This behavior is a little random, so it's hard to reproduce.

evanbrociner commented 11 months ago

I am also facing this error. Following this thread.

stewones commented 11 months ago

same

italojs commented 11 months ago

@stewones @evanbrociner Could you share your code? I can't share mine because it belongs to the company where I work; it's private.

kankovskyj commented 10 months ago

I have the same issue, following.

Edit: switching between the agentType: "zero-shot-react-description" and "chat-zero-shot-react-description" seems to make a difference but certainly does not solve the issue...

quinn-eschenbach commented 10 months ago

I had the same issue, but after switching to agentType: "chat-conversational-react-description", it works for now.

evanbrociner commented 10 months ago

I don't know about other folks, but I found that for local LLMs you need to add an "AI" marker to the tool response. Here is an example: agent.agent.template_tool_response = '''Use the combination of the User's last query and this {observation}, to respond to user.<|AI|>\n'''
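The idea is just to end the tool-response template with the model's assistant marker so the local LLM continues as the assistant. A minimal sketch of the substitution — no langchain APIs here, since the exact property name differs between versions, and the "<|AI|>" token depends on your model's prompt format:

```javascript
// Illustrative template; "<|AI|>" is model-specific.
const templateToolResponse =
  "Use the combination of the user's last query and this {observation} to respond to the user.<|AI|>\n";

// Substitute the observation the way a prompt template would:
const prompt = templateToolResponse.replace(
  "{observation}",
  "The maximum value for cpu usage for your app is 237.88%."
);
console.log(prompt.endsWith("<|AI|>\n")); // → true
```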

dosubot[bot] commented 7 months ago

Hi, @italojs,

I'm helping the langchainjs team manage their backlog and am marking this issue as stale. From what I understand, you reported a bug related to the agent ignoring the correct output from the APIChain and providing a different response. You and other users have attempted to debug the issue, and some potential solutions, such as switching the agentType or modifying the tool response, have been discussed.

Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and cooperation.