Closed · jsemrau closed this 9 months ago
🤖
Hello,
Hello,

To make your ReAct agent with Memory and Tools wait for an observation from the "human" tool in Streamlit, you can customize the `input_func` passed when loading the tools. This function is used to collect the user's input. Here is how you can do it:
```python
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
import streamlit as st

def get_input() -> str:
    if prompt := st.chat_input():
        return prompt

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["human", "ddg-search"], llm=llm, input_func=get_input)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

if prompt := get_input():
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)
```
In this code, `get_input()` waits for the user to type a message in the Streamlit app. Once the user submits a message, it is passed to `agent.run()` as the prompt argument. The `StreamlitCallbackHandler` displays the agent's thoughts and actions in the app, and the agent's response is then written out with `st.write(response)`.
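Conceptually, the "human" tool simply forwards the agent's question to whatever `input_func` it was given and returns the reply as the observation. Here is a dependency-free sketch of that contract; `HumanTool` and `scripted_input` are illustrative stand-ins, not LangChain's actual internals:

```python
from typing import Callable

class HumanTool:
    """Toy stand-in for LangChain's 'human' tool: it shows the agent's
    question and returns input_func's reply as the observation."""
    def __init__(self, input_func: Callable[[], str]):
        self.input_func = input_func

    def run(self, question: str) -> str:
        print(question)           # the agent's question to the human
        return self.input_func()  # the reply becomes the observation

def scripted_input() -> str:
    # Stand-in for a real input source (terminal input, st.chat_input, ...)
    return "yes, proceed"

tool = HumanTool(input_func=scripted_input)
print(tool.run("Should I continue?"))  # → yes, proceed
```

The key point is that the tool blocks on `input_func` and hands back whatever it returns, which is why the choice of `input_func` determines whether the agent actually pauses for the human.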
As for `StreamlitCallbackHandler`: it is a callback handler that writes to a Streamlit app. It inherits from `BaseCallbackHandler` and is designed to handle callbacks for Streamlit. It has methods for different events, such as the start and end of a tool, of an LLM call, and of a chain, and it also handles errors and agent actions. When a tool ends, the current thought is completed; when the agent finishes, the current thought is completed with a final label indicating that the agent's final thought is complete.
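The event interface can be pictured with a toy handler; the class below is a dependency-free illustration whose method names mirror `BaseCallbackHandler`'s hooks, not LangChain's real implementation:

```python
class MiniCallbackHandler:
    """Toy illustration of the event hooks a Streamlit-style
    callback handler implements."""
    def __init__(self):
        self.log = []

    def on_tool_start(self, name: str, input_str: str) -> None:
        # A new thought begins when a tool is invoked.
        self.log.append(f"tool_start:{name}")

    def on_tool_end(self, output: str) -> None:
        # The current thought is completed here.
        self.log.append("tool_end")

    def on_agent_finish(self, final_answer: str) -> None:
        # The final thought gets its "complete" label here.
        self.log.append("agent_finish")

h = MiniCallbackHandler()
h.on_tool_start("human", "Should I continue?")
h.on_tool_end("yes")
h.on_agent_finish("done")
print(h.log)  # → ['tool_start:human', 'tool_end', 'agent_finish']
```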
I hope this helps! If you have any other questions, feel free to ask.
Best, Dosu
The problem I have is that I can't use `initialize_agent`, because I can't pass `memory` to it (see https://python.langchain.com/docs/modules/memory/agent_with_memory), and if I use
```python
from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

tools = load_tools(["human"], llm=llm, input_func=get_input)
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(
    llm_chain=llm_chain,
    tools=tools,
    verbose=True,
    return_intermediate_steps=True,
)
agent_exec = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)
```
then `get_input` is never called.
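A likely explanation (an assumption on my part, not confirmed in this thread): Streamlit re-runs the whole script on every interaction, and `st.chat_input()` returns `None` when nothing has been submitted during that run, so an `input_func` built on it returns `None` instead of blocking for a reply mid-agent-loop. A dependency-free sketch of that behavior, with `fake_chat_input` standing in for `st.chat_input`:

```python
def fake_chat_input(submitted=None):
    """Stand-in for st.chat_input(): returns the submitted message,
    or None when the user has not submitted anything on this rerun."""
    return submitted

def get_input() -> str:
    # Mirrors the get_input above: when no message is pending, the
    # walrus condition is falsy and the function falls through,
    # implicitly returning None.
    if prompt := fake_chat_input():
        return prompt

print(get_input())  # → None: the 'human' tool gets no observation
```

This is why a pattern that works in a blocking Jupyter cell (where `input()` truly waits) does not carry over directly to Streamlit's rerun model.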
Hi, @jsemrau,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you are encountering an issue with your ReAct agent not stopping to receive human feedback when using Streamlit, despite it working correctly in Jupyter Notebook. Dosubot has provided a detailed response with code examples and references to relevant documentation, and you have responded that you are unable to use `initialize_agent` and that the `get_input` function is never called.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!
Did you ever figure this out, @jsemrau?
System Info
LangChain version = 0.0.286
Python = 3.8.8
macOS
I am working on a ReAct agent with Memory and Tools that should stop and ask a human for input.
I worked off this article in the documentation: https://python.langchain.com/docs/modules/memory/agent_with_memory
On Jupyter Notebook it works well when the agent stops and picks up the "Observation" from the human. Now I am trying to bring this over to Streamlit and am struggling with having the agent wait for the observation.
As one can see in the video, the output is brought over into the right streamlit container, yet doesn't stop to get the human feedback.
I am using a custom output parser and the recommended StreamlitCallbackHandler.
https://github.com/langchain-ai/langchain/assets/416379/ed57834a-2a72-4938-b901-519f0748dd95
Who can help?
No response
Reproduction
My output parser looks like this:
Expected behavior
The agent should wait for Streamlit to provide a chat input and use it as the feedback from the "human" tool.