langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
93.05k stars 14.95k forks

`langchain.agents.initialize_agent` does not support custom prompt template #23884

Open JeevansSP opened 3 months ago

JeevansSP commented 3 months ago

Checked other resources

Example Code

from pprint import pprint

from langchain.agents import initialize_agent, AgentType, Tool
from langchain.agents import OpenAIMultiFunctionsAgent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import SystemMessage
from langchain_experimental.utilities import PythonREPL

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=PythonREPL().run,
)

memory = ConversationBufferMemory(return_messages=True, k=10, memory_key="chat_history")

prompt = OpenAIMultiFunctionsAgent.create_prompt(
    system_message=SystemMessage(content="You are a helpful AI bot"),
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)

llm = BedrockChat(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    client=client,  # initialized elsewhere
    model_kwargs={"max_tokens": 4050, "temperature": 0.5},
    verbose=True,
)

tools = [
    repl_tool,
]

agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    memory=memory,
    prompt=prompt,
)

res = agent_executor.invoke({
    'input': 'hi how are you?'
})
print(res['output'])

# Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?

res = agent_executor.invoke({
    "input": "what was my previous message?"
})

print(res['output'])

# I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.

# But when I checked the memory buffer:

print(memory.buffer)

# [HumanMessage(content='hi how are you?'), AIMessage(content="Hello! As an AI assistant, I don't have feelings, but I'm functioning well and ready to help you. How can I assist you today?"), HumanMessage(content='hi how are you?'), AIMessage(content="Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?"), HumanMessage(content='what was my previous message?'), AIMessage(content="I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.")]

# As you can see, the memory is being updated,
# so I checked the agent executor's prompt template:

pprint(agent_executor.agent.llm_chain.prompt)

# ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Respond to the human as helpfully and accurately as possible. You have access to the following tools:\n\npython_repl: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`., args: {{\'tool_input\': {{\'type\': \'string\'}}}}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or python_repl\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}\n```\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'input'], template='{input}\n\n{agent_scratchpad}'))])

# As you can see, there is no input variable placeholder for `chat_history`
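This matches the behavior I'm seeing: the executor's prompt only declares `input` and `agent_scratchpad`, so even though the memory keeps filling `chat_history`, there is no placeholder to render it into and it never reaches the model. A minimal stdlib-only sketch of the mechanism (`render_prompt` is an illustrative helper, not a LangChain API):

```python
# Sketch: a prompt template only renders the variables it declares;
# anything else in the inputs dict is silently dropped.

def render_prompt(template: str, input_variables: list[str], values: dict) -> str:
    """Format the template using only the declared input variables."""
    selected = {k: v for k, v in values.items() if k in input_variables}
    return template.format(**selected)

values = {
    "input": "what was my previous message?",
    "agent_scratchpad": "",
    "chat_history": "Human: hi how are you?\nAI: Hello! ...",
}

# The agent's actual human template has no {chat_history} placeholder at all,
# so the populated memory is never seen by the model.
agent_template = "{input}\n\n{agent_scratchpad}"
print(render_prompt(agent_template, ["input", "agent_scratchpad"], values))

# A template that declares the placeholder does include the history.
fixed_template = "{chat_history}\n\n{input}\n\n{agent_scratchpad}"
print(render_prompt(fixed_template, ["input", "agent_scratchpad", "chat_history"], values))
```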

Error Message and Stack Trace (if applicable)

No response

Description

System Info

System Information
------------------
> OS:  Darwin
> OS Version:  Darwin Kernel Version 23.5.0: Wed May  1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version:  3.12.2 (v3.12.2:6abddd9f6a, Feb  6 2024, 17:02:06) [Clang 13.0.0 (clang-1300.0.29.30)]

Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.80
> langchain_anthropic: 0.1.15
> langchain_aws: 0.1.6
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve
JeevansSP commented 3 months ago

Instead of `OpenAIMultiFunctionsAgent.create_prompt`, I have also tried a normal chat prompt template; the result is still the same.
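A possible workaround, assuming `initialize_agent` forwards `agent_kwargs` to the agent's `create_prompt` (which for the structured-chat agent accepts `memory_prompts` and `input_variables`), is to inject the placeholder there instead of via `prompt=`, which appears to be ignored. Untested sketch, reusing the `tools`, `llm`, and `memory` objects from the repro above:

from langchain.agents import initialize_agent, AgentType
from langchain.prompts import MessagesPlaceholder

agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    memory=memory,
    agent_kwargs={
        # Render the buffered messages into the prompt...
        "memory_prompts": [MessagesPlaceholder(variable_name="chat_history")],
        # ...and declare the variable so it is not dropped.
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)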