HappyWHoo opened 11 months ago
Are you not missing an input to the callback handler?
I have the same issue where res = await llm_math.acall(message.content, callbacks=[cl.LangchainCallbackHandler()])
is no longer valid and is deprecated. The AgentExecutor in LangChain now uses ainvoke, astream, etc., and the callbacks don't work.
I've tried hundreds of different combinations trying to figure this out. The current code I have that's not streaming:
res = await agent_executor.ainvoke({"input": message.content}, callbacks=[cl.LangchainCallbackHandler(stream_final_answer=True)])
using the model:
model = AzureChatOpenAI(
    openai_api_version="2023-12-01-preview",
    azure_deployment="gpt-4-Prev",
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_key=AZURE_OPENAI_API_KEY,
    temperature=0,
    streaming=True,
)
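One thing worth checking: with the runnable interface (ainvoke / astream), callbacks are normally carried inside the config argument rather than passed as a bare keyword. A minimal sketch, assuming the same agent_executor as above and swapping in Chainlit's async handler variant:

from langchain_core.runnables import RunnableConfig

# Callbacks ride along in the RunnableConfig with ainvoke/astream.
res = await agent_executor.ainvoke(
    {"input": message.content},
    config=RunnableConfig(
        callbacks=[cl.AsyncLangchainCallbackHandler(stream_final_answer=True)]
    ),
)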
I have the same problem, can't stream final answer with LlamaCpp.
If somebody is looking for a solution to this issue, the following code is working with a LangChain OpenAI agent:
import chainlit as cl
from langchain_core.messages import AIMessage, HumanMessage

@cl.on_chat_start
async def on_chat_start():
    # Define the agent
    sellbotix = Sellbotix(model_name="gpt-4-turbo-preview", temperature=0.7)
    runnable = await sellbotix.get_runnable()
    cl.user_session.set("sellbotix", runnable)
    chat_history = []
    cl.user_session.set("chat_history", chat_history)

@cl.on_message
async def main(message: cl.Message):
    sellbotix = cl.user_session.get("sellbotix")
    chat_history = cl.user_session.get("chat_history")
    msg = cl.Message(content="")
    async for event in sellbotix.astream_events(
        {"input": message.content, "chat_history": chat_history}, version="v1"
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                await msg.stream_token(content)
        elif kind == "on_tool_start":
            async with cl.Step(name=event["name"]) as step:
                step.input = f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
                step.output = f"Finishing tool: {event['name']} with inputs: {event['data'].get('input')}"
    chat_history.append(HumanMessage(content=message.content))
    chat_history.append(AIMessage(content=msg.content))
    cl.user_session.set("chat_history", chat_history)
    await msg.send()
Note that I don't use a callback, since the built-in callback isn't working. Instead I simply use the new astream_events() method to get a stream from the chat model and push the chunks into the msg object.
Hey @jxraynaud, I have set cot to full in the config.toml, but using the code above I am not able to see the tool calls made by LangChain in the UI. When I was using the non-streaming syntax (just graph.invoke) from https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support/#state-assistant, I was able to see which function calls were being made. Can you share some details on how to surface those tool calls while streaming?
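One way to surface tool calls while streaming is to pair the on_tool_start / on_tool_end events by run_id and render each pair as its own step. This is only a sketch: graph is assumed to be the compiled LangGraph agent from the linked tutorial, and the payload shapes follow the astream_events v1 schema.

import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    msg = cl.Message(content="")
    open_steps = {}  # run_id -> cl.Step, to pair start and end events
    # `graph` is assumed to be the compiled LangGraph agent from the tutorial
    async for event in graph.astream_events(
        {"messages": [("user", message.content)]}, version="v1"
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                await msg.stream_token(content)
        elif kind == "on_tool_start":
            step = cl.Step(name=event["name"], type="tool")
            step.input = event["data"].get("input")
            await step.send()
            open_steps[event["run_id"]] = step
        elif kind == "on_tool_end":
            step = open_steps.pop(event["run_id"], None)
            if step is not None:
                step.output = event["data"].get("output")
                await step.update()
    await msg.send()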
This is my code; it runs well:
from langchain_core.runnables import RunnableConfig

@cl.on_message
async def main(message: cl.Message):
    ...
    # astream() returns an async iterator, so it has to be consumed:
    async for chunk in agent.astream(
        input={"messages": message_history},
        config=RunnableConfig(
            configurable={"thread_id": user.id},
            recursion_limit=15,
            callbacks=[
                cl.AsyncLangchainCallbackHandler(
                    stream_final_answer=True,
                    force_stream_final_answer=True,
                )
            ],
        ),
    ):
        ...
The agent is an AgentExecutor, and I'm unable to get streaming output in Chainlit.
Can anyone help me?
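For the AgentExecutor case, the astream_events approach shared earlier in the thread should apply as well, since AgentExecutor itself implements the runnable interface. A minimal sketch, assuming an agent_executor built as in the earlier comments:

@cl.on_message
async def main(message: cl.Message):
    msg = cl.Message(content="")
    # AgentExecutor is a Runnable, so the same event-streaming
    # pattern used above works for it too.
    async for event in agent_executor.astream_events(
        {"input": message.content}, version="v1"
    ):
        if event["event"] == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                await msg.stream_token(content)
    await msg.send()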