pokidyshev opened 1 year ago
hi @pokidyshev,
1) Create a runnable version of your executor (see the LCEL docs on how to do this). You'll likely need to make it configurable so that you can change the memory at run time based on user identity (check the example with `configurable`).
2) `add_routes` has a per-request modifier parameter that you can use to add user-specific information to the config.
We don't have good documentation yet to show how to do these, but you can look at implementation in https://github.com/langchain-ai/opengpts for reference
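To make the second point concrete, here is a minimal sketch of such a per-request modifier. `per_req_config_modifier` is the `add_routes` parameter referred to above; the header names (`x-session-id`, `authorization`) and the config keys are assumptions for illustration:

```python
# Hypothetical sketch: copy per-user information from the incoming HTTP
# request into the runnable config before the chain is invoked.
from typing import Any, Dict


def per_req_config_modifier(config: Dict[str, Any], request: Any) -> Dict[str, Any]:
    """Inject user identity from request headers into config["configurable"]."""
    configurable = dict(config.get("configurable", {}))
    # Header names are assumptions; use whatever your auth scheme provides.
    configurable["session_id"] = request.headers.get("x-session-id", "default")
    configurable["access_token"] = request.headers.get("authorization", "")
    return {**config, "configurable": configurable}
```

It would then be wired up as `add_routes(app, chain, per_req_config_modifier=per_req_config_modifier)`, so every request gets its own `session_id` and `access_token` in the config.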
I'm running into the same problem and am blocked. Would it be possible to share a sample that uses this memory config through the API?
@JevenZhou will do -- will try to do it this week
@eyurtsev Thanks! Looking forward to it.
Haven't gotten around to a full example yet, but we added this to the code base last week, which should be fairly helpful: https://api.python.langchain.com/en/latest/schema.runnable/langchain.schema.runnable.history.RunnableWithMessageHistory.html
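The core pattern `RunnableWithMessageHistory` relies on is a factory callable that maps a `session_id` to a (cached) message-history object, so repeat calls for the same session share one history. A plain-Python sketch of that pattern, with illustrative stand-in names:

```python
# Minimal illustration of a session-keyed history factory. In real code the
# history object would be a BaseChatMessageHistory subclass (e.g. backed by
# Firestore or Redis); here it is a trivial in-memory stand-in.
from typing import Dict, List


class InMemoryHistory:
    def __init__(self) -> None:
        self.messages: List[str] = []

    def add_message(self, message: str) -> None:
        self.messages.append(message)


_store: Dict[str, InMemoryHistory] = {}


def get_session_history(session_id: str) -> InMemoryHistory:
    # Reuse the same history object for repeat calls with one session_id,
    # so turns accumulate across requests; each session stays isolated.
    if session_id not in _store:
        _store[session_id] = InMemoryHistory()
    return _store[session_id]
```

A factory like this is what gets passed as the second argument to `RunnableWithMessageHistory`, which by default reads the `session_id` from `config["configurable"]`.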
Hi, @eyurtsev! Thanks for the update!
I've managed to attach message history based on `session_id` using `RunnableWithMessageHistory`, though I had to patch it so that it also saves intermediate steps.
I'm now stuck on customizing agent tools based on the user's `access_token`. I need to create a new instance of `APIWrapper(access_token)` on each request, then create a new set of tools from that instance and pass them to the agent. Any ideas how that can be achieved?
Now my code looks like this:
```python
# NOTE: imports reconstructed from context; FirestoreChatMessageHistory,
# PipedriveAPIWrapper, init_tools, create_azure_gpt4, SYSTEM_MESSAGE, and
# TOOLS are project-specific helpers from my own modules (not shown).
from functools import partial

from fastapi import FastAPI
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import BaseChatMessageHistory
from langchain.schema.runnable.history import RunnableWithMessageHistory
from langchain.tools.render import format_tool_to_openai_function
from langserve import add_routes
from pydantic import BaseModel


def init_chat_history(destination: str, session_id: str) -> BaseChatMessageHistory:
    return FirestoreChatMessageHistory(
        destination=destination,
        session_id=session_id,
        max_messages=5,
    )


system_message = SYSTEM_MESSAGE.format(warning="", custom_prompt="")
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_message),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# TODO: tools must be updated on each request
api_wrapper = PipedriveAPIWrapper(access_token)
tools = init_tools(api_wrapper, TOOLS, include_custom_fields=True)

llm = create_azure_gpt4()
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "history": lambda x: x["history"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

executor = AgentExecutor(
    agent=agent,  # type: ignore
    tools=tools,
    max_iterations=15,
    handle_parsing_errors=True,
    return_intermediate_steps=True,
    tags=["pipedrive"],
    # metadata=hints_user,
    verbose=True,
)

executor_with_history = RunnableWithMessageHistory(
    executor,  # type: ignore
    partial(init_chat_history, "pipedrive"),
    history_messages_key="history",
)


class Input(BaseModel):
    input: str


class Output(BaseModel):
    output: str


app = FastAPI(title="LangChain Server", version="1.0")

add_routes(
    app,
    executor_with_history.with_types(input_type=Input, output_type=Output),
    path="/pipedrive",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
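For reference, a client call to the served executor would carry the session id in the config. A sketch of the `/invoke` payload shape (LangServe's invoke endpoint accepts `{"input": ..., "config": ...}`, and `RunnableWithMessageHistory` reads `session_id` from `config["configurable"]` by default):

```python
import json


def build_invoke_payload(user_input: str, session_id: str) -> str:
    # The "input" shape matches the Input model above ({"input": <text>});
    # session_id rides along under config.configurable.
    payload = {
        "input": {"input": user_input},
        "config": {"configurable": {"session_id": session_id}},
    }
    return json.dumps(payload)
```

This string would then be POSTed to `/pipedrive/invoke`, so each session keeps its own Firestore-backed history.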
Hi, team-langchain,
I have an agent that uses memory, user authentication, and function calling. I'd like to migrate it to langserve in production, but I couldn't find anything as complex as my case among the examples in the docs. So I got stuck and need help. Could you please give me advice on how to convert this code to LCEL?
agent.py:
app.py
TL;DR of what this code does:
Do you have any ideas how to turn this into a langserve project?