langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/
MIT License

DOC: streaming-llm-tokens #584

Closed gbaian10 closed 3 months ago

gbaian10 commented 3 months ago

Issue with current documentation:

https://langchain-ai.github.io/langgraph/how-tos/streaming-tokens/#streaming-llm-tokens

The sample code does not produce the expected results: no "on_chat_model_stream" events are emitted.

This is my code:

from typing import Annotated, TypedDict

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph, add_messages
from langgraph.prebuilt import ToolNode

load_dotenv()

class State(TypedDict):
    messages: Annotated[list, add_messages]

@tool
def search(query: str):
    """Call to surf the web."""
    return ["Cloudy with a chance of hail."]

tools = [search]
tool_node = ToolNode(tools)

model = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
model = model.bind_tools(tools)

def should_continue(state: State):
    messages = state["messages"]
    last_message = messages[-1]
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"

async def call_model(state: State):
    messages = state["messages"]
    response = await model.ainvoke(messages)
    return {"messages": response}

workflow = StateGraph(State)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "action", "end": END},
)
workflow.add_edge("action", "agent")
app = workflow.compile()

async def main():
    inputs = [HumanMessage(content="what is the weather in sf")]
    async for event in app.astream_events({"messages": inputs}, version="v1"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                print(content, end="|")
        elif kind == "on_tool_start":
            print("--")
            print(
                f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
            )
        elif kind == "on_tool_end":
            print(f"Done tool: {event['name']}")
            print(f"Tool output was: {event['data'].get('output')}")
            print("--")

if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

output:

--
Starting tool: search with inputs: {'query': 'weather in San Francisco'}
Done tool: search
Tool output was: ['Cloudy with a chance of hail.']
--

python = 3.10, langgraph = 0.0.62, langchain-core = 0.2.3, langchain-openai = 0.1.8

Idea or request for content:

No response

hinthornw commented 3 months ago

Hi there! tl;dr

from langchain_core.runnables import RunnableConfig

async def call_model(state: State, config: RunnableConfig):
    messages = state["messages"]
    # Pass the config through so nested runs pick up the streaming callbacks
    response = await model.ainvoke(messages, config)
    return {"messages": response}

instead of

async def call_model(state: State):
    messages = state["messages"]
    response = await model.ainvoke(messages)
    return {"messages": response}

Why? Python's asyncio before 3.11 doesn't support context propagation through tasks, so the streaming tracer that streams out all the traced steps isn't seen by the model unless you explicitly pass the configuration from the node into nested runs. I'll update the doc to emphasize this.
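The underlying pitfall can be reproduced in plain Python. This is an illustrative sketch, not LangChain code (`callbacks`, `node`, and `model` are hypothetical names): a value set in one task's copy of the context is invisible to sibling tasks, which is why the config must be handed over explicitly.

```python
import asyncio
import contextvars

# Hypothetical stand-in for the callbacks/config LangChain tracks per run.
callbacks: contextvars.ContextVar = contextvars.ContextVar("callbacks", default=None)

async def node() -> None:
    # Each task runs in a *copy* of the parent's context, so this set()
    # is only visible inside this task's own copy.
    callbacks.set("streaming-tracer")

async def model():
    # A sibling task gets its own copy of the parent context, taken at
    # creation time, and never sees the value set inside node().
    return callbacks.get()

async def main():
    await asyncio.create_task(node())
    return await asyncio.create_task(model())

print(asyncio.run(main()))  # None: the value never crossed the task boundary
```

Passing `config` as an argument sidesteps this entirely, which is why the explicit-parameter fix above works on every Python version.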

LMK if this doesn't fix the issue for you and we can re-open this issue!

gbaian10 commented 3 months ago


Thank you, that fixed it for me.

By the way, the "async nodes" hyperlink at the top of the same page is currently broken (404).