[Open] meliascosta opened this issue 1 month ago
Hm, I'm not able to reproduce this by just calling `astream_events` on the Azure Container Apps graph. Could you provide a full LangGraph graph that reproduces it?
Could you also try updating your packages:
pip install -U langchain langchain-community langchain-openai langchain-postgres langgraph
It would also be helpful to know what the outputs of the `create_plot` and `query_data` tools are.
Thanks for the reply! I thought I had updated my packages, but it turns out a pinned dependency was holding everything back. I've just updated; the new versions are listed below. The problem persists.
OS: Linux
OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
Python Version: 3.12.3 (main, May 23 2024, 22:52:35) [GCC 9.4.0]
langchain_core: 0.2.5
langchain: 0.2.3
langchain_community: 0.2.4
langsmith: 0.1.75
langchain_openai: 0.1.8
langchain_postgres: 0.0.6
langchain_text_splitters: 0.2.1
langgraph: 0.0.65
I can confirm the Azure Container Apps graph runs fine, but my code still fails. I've hardcoded the tool outputs and the repr on the RawToolMessage, and it still fails. Replacing it with a regular ToolMessage makes it work again.
Pasting the whole code for the graph here:
from datetime import datetime
from typing import Annotated, Literal, Optional
from loguru import logger as log
from langchain_core.messages import SystemMessage, ToolMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import (
    RunnableConfig,
    RunnableLambda,
    ensure_config,
)
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from pydantic import BaseModel
from typing_extensions import TypedDict
from twill_llm_engine.llms import model_selector
def update_dialog_stack(left: list[str], right: Optional[str]) -> list[str]:
    """Push or pop the state."""
    if right is None:
        return left
    if right == "pop":
        return left[:-1]
    return left + [right]
class Widget(BaseModel):
    name: str
    type: str
    data: dict
    echarts_option: dict
    sql_query: str


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    widgets: dict
    user_info: str
    dialog_state: Annotated[
        list[
            Literal[
                "assistant",
                "generate_chart",
            ]
        ],
        update_dialog_stack,
    ]
@tool
def create_plot(nl_query: str):
    "Creates a plot from a natural language query. The natural language query should be as descriptive as possible."
    config = ensure_config()
    # output = plot_chain.invoke({"nl_query": nl_query}, config=config)
    output = {"a": "b"}
    return output


@tool
def query_data(nl_query: str):
    "Brings information from database to answer a natural language query. The natural language query should be as descriptive as possible."
    config = ensure_config()
    # output = sql_agent.invoke({"nl_query": nl_query}, config=config)
    output = {"a": "b"}
    return output
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            "You are a helpful accounting and finance assistant. "
            " Use the provided tools to query the database or create a plot. "
            " If you want to create a plot or chart no need to query the database first, the plot tool already takes care of that. "
            " If a search comes up empty, expand your search before giving up."
            "\nCurrent time: {time}.",
        ),
        MessagesPlaceholder("messages"),
    ]
).partial(time=datetime.now())
tools = [create_plot, query_data]
tools_by_name = {tool.name: tool for tool in tools}
class RawToolMessage(ToolMessage):
    """
    Customized tool message that lets us pass around the raw tool outputs (along with string contents for passing back to the model).
    """

    raw: dict
    """Arbitrary (non-string) tool outputs. Won't be sent to the model."""
    tool_name: str
    """Name of the tool that generated the output."""
def execute_tools(state: State, config) -> dict:
    """
    Execute the called tools.
    """
    messages = []
    last_ai_msg = [
        msg for msg in state["messages"] if isinstance(msg, AIMessage)
    ][-1]
    for tool_call in last_ai_msg.tool_calls:
        try:
            result = tools_by_name[tool_call["name"]].invoke(
                tool_call["args"], config
            )
        except Exception as e:
            messages.append(
                ToolMessage(
                    content=f"Error: {repr(e)}\n please fix your mistakes.",
                    tool_call_id=tool_call["id"],
                    tool_name=tool_call["name"],
                )
            )
            continue
        # if tool_call["name"] == "create_plot":
        #     result_repr = f"Created a plot with title: {result['plot_echarts_option']['title']['text']}"
        # elif tool_call["name"] == "query_data":
        #     result_repr = str({k: (v if k != "data" else v[:5]) for k, v in result.items()})
        log.info(result)
        # messages.append(ToolMessage("something", tool_call_id=tool_call["id"]))
        messages.append(
            RawToolMessage(
                "something",
                raw={"a": "B"},
                tool_call_id=tool_call["id"],
                tool_name=tool_call["name"],
            )
        )
    return {"messages": messages}
def call_model(state: State, config: RunnableConfig):
    configurable = config["configurable"]
    model = model_selector[configurable.get("main_model", "gpt3.5")]
    assistant_runnable = primary_assistant_prompt | model.bind_tools(tools)
    while True:
        result = assistant_runnable.invoke(state)
        # If the LLM happens to return an empty response, we will re-prompt it
        # for an actual response.
        if not result.tool_calls and (
            not result.content
            or isinstance(result.content, list)
            and not result.content[0].get("text")
        ):
            messages = state["messages"] + [
                ("user", "Respond with a real output.")
            ]
            state = {**state, "messages": messages}
        else:
            break
    return {"messages": result}
builder = StateGraph(State)
# Define nodes: these do the work
builder.add_node("assistant", call_model)
builder.add_node("tools", execute_tools)
# Define edges: these determine how the control flow moves
builder.set_entry_point("assistant")
builder.add_conditional_edges(
"assistant",
tools_condition,
)
builder.add_edge("tools", "assistant")
app = builder.compile()
Thanks again!
I wasn't able to figure out exactly what was wrong, but switching to `astream_events` with `version="v2"` resolved the issue.
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
Here is the complete trace, but the error actually happens earlier:
Error in LogStreamCallbackHandler.on_llm_end callback: ValueError("Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain', 'schema', 'messages', 'RawToolMessage')")
Description
I'm trying to stream events from a graph that uses a custom RawToolMessage class, as described in this example notebook. If I use the `.invoke` method as in the original notebook it works fine, but when I call `.astream_events` I get an error:
Error in LogStreamCallbackHandler.on_llm_end callback: ValueError("Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain', 'schema', 'messages', 'RawToolMessage')")
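For context on the message itself: langchain-core revives serialized objects from a registry of known (namespace, class) pairs, so a user-defined subclass has nothing to map back to. A toy model of that failure mode (illustrative only, not langchain-core's actual code):

```python
# Toy model of namespace-based deserialization. Only registered classes can be
# revived; an unregistered subclass raises the same kind of ValueError as above.
REGISTRY = {
    ("langchain", "schema", "messages", "ToolMessage"): dict,  # stand-in type
}


def deserialize(namespace):
    try:
        return REGISTRY[namespace]
    except KeyError:
        raise ValueError(
            "Trying to deserialize something that cannot be deserialized "
            f"in current version of langchain-core: {namespace}"
        )


deserialize(("langchain", "schema", "messages", "ToolMessage"))       # known class: ok
# deserialize(("langchain", "schema", "messages", "RawToolMessage"))  # would raise ValueError
```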
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: