Open ChengyangDu opened 7 months ago
Hi @ChengyangDu, I still need to add an example of how to do it -- haven't managed to get around to it yet; targeting next week (other users have asked about this as well).
What you're looking for is this: https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html#langchain_core.runnables.history.RunnableWithMessageHistory
It'll need to be combined with a per-request modifier in `add_routes` that resolves the raw request object to a session id corresponding to the given user. You can check how it's done in OpenGPTs for inspiration:
https://github.com/langchain-ai/opengpts
(Or you can wait for a few days -- and I'll try to post a more self contained example)
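Until that example lands, here is a minimal sketch of such a per-request modifier (the `x-session-id` header name and the commented-out wiring are my assumptions, not from this thread -- real code would derive the session id from authentication):

```python
# Sketch of a per-request config modifier for langserve's add_routes.
# Assumption: the client sends its session id in an "x-session-id" header.
from typing import Any, Dict


def per_req_config_modifier(config: Dict[str, Any], request: Any) -> Dict[str, Any]:
    """Copy the incoming config and inject the caller's session id."""
    configurable = dict(config.get("configurable", {}))
    configurable["session_id"] = request.headers.get("x-session-id", "anonymous")
    return {**config, "configurable": configurable}


# Wiring (hypothetical app/chain names), roughly:
#   from langserve import add_routes
#   add_routes(app, chain_with_history,
#              per_req_config_modifier=per_req_config_modifier)
```

The modifier itself is a pure function, so it can be unit-tested with a stub request object before touching the server.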
Sure. I'll wait and see. In a production-ready setup , I believe two features may need support for conversational memory:
Hi! Based on the info you provided, I've tried to distinguish histories by session using RunnableWithMessageHistory and RedisChatMessageHistory.
However, this approach doesn't seem to support recording intermediate_steps, which conflicts with 'hwchase17/react-chat'. In the end, the code below fails:
```python
# `llm` and `tools` are defined elsewhere in my setup
from langchain import hub
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.tools.render import render_text_description
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

prompt = hub.pull("hwchase17/react-chat")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
        "chat_history": lambda x: x["chat_history"],
    }
    | prompt
    | llm_with_stop
    | ReActSingleInputOutputParser()
)
chain_with_history = RunnableWithMessageHistory(
    agent,
    lambda session_id: RedisChatMessageHistory(session_id, url="redis://localhost:6379/0"),
    input_messages_key="input",
    history_messages_key="chat_history",
)
```
Stack info:

```
File "assistant_chain_distributed.py", line 180, in <lambda>
    "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
KeyError: 'intermediate_steps'
```
Any suggestions on this? Thanks a lot!
Merry Christmas. I had the same problem. Any update on this?
MemGPT: The Future of LLMs with Unlimited Memory (https://github.com/cpacker/MemGPT). I strongly recommend taking a look at this one.
Still looking for a solution! Has anyone managed session-id-based chat history or pipeline state (LangGraph)?
ConversationBufferMemory is useful in conversational agents, as in the code below.
However, when integrated with LangServe, it is difficult to distinguish each user's context. Is there a good practice for this problem when using LangServe?