langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License

AgentType.OPENAI_FUNCTIONS doesn't show Tools in the prompts. #10652

Closed Hsgngr closed 8 months ago

Hsgngr commented 11 months ago

System Info

langchain==0.0.287, MACOS

Who can help?

TL;DR: Where are the tools in the prompts?

Hi everyone, I am experimenting with the AgentTypes and found that not everything is shown in the prompts.

I have set langchain.debug = True and expect to see every detail of my prompts. However, when I use agent=AgentType.OPENAI_FUNCTIONS, I don't actually see the full prompt that is sent to OpenAI.

Agent Configurations:

# There is only one tool. (search is a search utility defined elsewhere, e.g. a SerpAPI wrapper.)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to search internet for question. You should ask targeted questions"
    )]

# Initialize the agent
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613",
                 openai_api_key=os.getenv("OPENAPI_SECRET_KEY"))

# The system message is simple.
system_message = SystemMessage(
    content="Your name is BOTIFY and try to answer the question, you can use the tools.")
agent_kwargs = {
    "system_message": system_message,
}

agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
                         agent_kwargs=agent_kwargs, verbose=True)

Example 1:

response = agent.run("whats the lyrics of Ezhel Pofuduk")

Result with debug output:

{
  "input": "whats the lyrics of Ezhel Pofuduk"
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: Your name is BOTIFY and try to answer the question, you can use the tools..\nHuman: whats the lyrics of Ezhel Pofuduk"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.89s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online.",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 186,
      "completion_tokens": 33,
      "total_tokens": 219
    },
    "model_name": "gpt-3.5-turbo-0613"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor] [1.89s] Exiting Chain run with output:
{
  "output": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online."

Questions:
1) Where are the tools in this prompt?
2) How can you force the agent to use one of the tools as a last resort?

By the way, I know the agent has the tools, because it sometimes uses them.

Example 2:

response = agent.run("NVDIA Share price?")

Result:

[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
  "input": "NVDIA Share price?"
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: Your name is BOTIFY and try to answer the question, you can use the tools.\nHuman: NVDIA Share price?"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.44s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": {
          "finish_reason": "function_call"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "",
            "additional_kwargs": {
              "function_call": {
                "name": "Search",
                "arguments": "{\n  \"__arg1\": \"NVIDIA share price\"\n}"
              }
            }
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 180,
      "completion_tokens": 18,
      "total_tokens": 198
    },
    "model_name": "gpt-3.5-turbo-0613"
  },
  "run": null
}
[tool/start] [1:chain:AgentExecutor > 3:tool:Search] Entering Tool run with input:
"NVIDIA share price"
[tool/end] [1:chain:AgentExecutor > 3:tool:Search] [1.56s] Exiting Tool run with output:
"439,89 -15,92 (%3,49)"
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: Your name is BOTIFY and try to answer the question, you can use the tools.\nHuman: NVDIA Share price?\nAI: {'name': 'Search', 'arguments': '{\\n  \"__arg1\": \"NVIDIA share price\"\\n}'}\nFunction: 439,89 -15,92 (%3,49)"
  ]
}
[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [2.12s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "NVIDIA share price is $439.89, down $15.92 (3.49%).",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "NVIDIA share price is $439.89, down $15.92 (3.49%).",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 217,
      "completion_tokens": 21,
      "total_tokens": 238
    },
    "model_name": "gpt-3.5-turbo-0613"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor] [5.12s] Exiting Chain run with output:
{
  "output": "NVIDIA share price is $439.89, down $15.92 (3.49%)."
}

How can I see my tools in the prompt? I need this because I would like to create my own custom agent rather than rely on the default prompts used by each agent type.

Information

Related Components

Reproduction

#################
import os

import langchain
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage

langchain.debug = True

tools = [
    # Tool(name="Weather", func=weather_service.get_response, description="..."),
    # Tool(name="Finance", func=finance_service.get_response, description="..."),
    Tool(
        name="Search",
        func=search.run,  # search is a search utility (e.g. a SerpAPI wrapper) defined elsewhere
        description="useful for when you need to search internet for question. You should ask targeted questions"
    ),
]
# Initialize the agent
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613",
                 openai_api_key=os.getenv("OPENAPI_SECRET_KEY"))

system_message = SystemMessage(
    content="Your name is BOTIFY and try to answer the question, you can use the tools")
agent_kwargs = {
    "system_message": system_message,
}
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
                         agent_kwargs=agent_kwargs, verbose=True)

response = agent.run("NVDIA Share price?")

Expected behavior

I was expecting to see the tools in my prompts as well.

dosubot[bot] commented 11 months ago

πŸ€–

Hello,

The tools are not shown in the prompt text when using AgentType.OPENAI_FUNCTIONS because they are not part of the prompt at all: the agent passes them to the OpenAI API separately, as function definitions in the request's functions parameter, and the debug output only prints the chat messages. The model then uses those function definitions to decide when to call a tool.

In your first example, the agent didn't call a tool because the model decided it could answer the question without one. In the second example, it decided to use the "Search" tool to answer the question about the NVIDIA share price. This is why you see a function call in the verbose output:

"function_call": {
  "name": "Search",
  "arguments": "{\n  \"__arg1\": \"NVIDIA share price\"\n}"
}

This function call is the agent's way of invoking the "Search" tool. The agent doesn't list the tools in the prompt because it doesn't need to: the model receives the function definitions alongside the messages and decides which one to call based on the input.
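
If you want to see exactly what is sent, you can reproduce the function definitions yourself. A minimal sketch, assuming a LangChain version around 0.0.287 where the format_tool_to_openai_function helper exists and the OpenAI functions agent exposes a functions property:

from langchain.tools.convert_to_openai import format_tool_to_openai_function

# The OpenAI function schema derived from each Tool. This is what goes into the
# `functions` field of the chat completion request, not into the prompt text.
for tool in tools:
    print(format_tool_to_openai_function(tool))

# The agent holds the same schemas; the AgentExecutor wraps it, hence agent.agent.
print(agent.agent.functions)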

As for forcing the use of a tool as a last resort, there isn't a straightforward way to do this with the current implementation of LangChain. The agent decides whether to use a tool based on the input and the capabilities of the tools. If you want to force the use of a tool, you might need to modify the agent's decision-making process.
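
That said, the OpenAI API itself lets you force a specific function via its function_call argument. Here is a rough sketch of that mechanism, calling the chat model directly rather than going through the agent; it assumes the tools and llm defined above, and that ChatOpenAI forwards extra keyword arguments of predict_messages to the request (which it did in this version):

from langchain.schema import HumanMessage
from langchain.tools.convert_to_openai import format_tool_to_openai_function

functions = [format_tool_to_openai_function(t) for t in tools]

# function_call={"name": "Search"} tells the API it must call that function.
message = llm.predict_messages(
    [HumanMessage(content="whats the lyrics of Ezhel Pofuduk")],
    functions=functions,
    function_call={"name": "Search"},
)
print(message.additional_kwargs.get("function_call"))

One way to get "last resort" behaviour would be to run the agent normally first and only fall back to a forced call like this when the plain answer isn't good enough.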

I hope this answers your questions. If you have any more questions or need further clarification, feel free to ask.



This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] commented 8 months ago

Hi, @Hsgngr,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you raised an issue regarding the AgentType.OPENAI_FUNCTIONS not displaying the tools in the prompts as expected, even with langchain.debug set to True. Dosubot provided a detailed explanation, stating that the tools are implicitly included in the form of function calls and the agent decides which tool to use based on the input. Additionally, Dosubot mentioned that there isn't a straightforward way to force the use of a tool as a last resort with the current implementation of LangChain.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself or the issue will be automatically closed in 7 days.

Thank you!