JoshuaC215 / agent-service-toolkit

Full toolkit for running an AI agent service built with LangGraph, FastAPI and Streamlit
https://agent-service-toolkit.streamlit.app

Error in callback coroutine: NotImplementedError('TokenQueueStreamingHandler does not implement `on_chat_model_start`') #33

Closed · zongking123 closed this issue 1 day ago

JoshuaC215 commented 1 week ago

Hi @zongking123, can you share a full stack trace and the LangChain version installed?

Are you using the original code or a fork?

Otherwise I won't be able to help.

JoshuaC215 commented 1 day ago

Going to close this until there's a clearer repro description and more details.

vshkl commented 9 hours ago

Some context about this issue from my side.

Error in logs:

INFO: "POST /stream HTTP/1.1" 200 OK
Error in callback coroutine: NotImplementedError('TokenQueueStreamingHandler does not implement `on_chat_model_start`')

Call trace:

So, the error happens here (in .../services/service.py):

import asyncio

from langchain_core.callbacks import AsyncCallbackHandler  # relevant imports, shown here for context


class TokenQueueStreamingHandler(AsyncCallbackHandler):
    """LangChain callback handler for streaming LLM tokens to an asyncio queue."""

    def __init__(self, queue: asyncio.Queue):
        self.queue = queue

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        if token:
            await self.queue.put(token)
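
For context, the exception appears to come from langchain-core itself: in 0.2.x, AsyncCallbackHandler's default on_chat_model_start raises NotImplementedError by design, so handlers must opt into chat-model events explicitly, and the callback manager logs the error as a warning rather than failing the request. A paraphrased sketch of the base-class behavior (not the exact source):

# Paraphrased from langchain-core 0.2.x behavior; not the exact source.
class AsyncCallbackHandler(BaseCallbackHandler):
    async def on_chat_model_start(self, serialized, messages, **kwargs):
        # The default implementation raises; this matches the message in the logs.
        raise NotImplementedError(
            f"{self.__class__.__name__} does not implement `on_chat_model_start`"
        )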

which is later used in the same file here:

async def message_generator(user_input: StreamInput) -> AsyncGenerator[str, None]:
    ...
    output_queue = asyncio.Queue(maxsize=10)
    if user_input.stream_tokens:
        kwargs["config"]["callbacks"] = [TokenQueueStreamingHandler(queue=output_queue)]
    ...

which in turn is used by the /stream endpoint:

@app.post("/stream")
async def stream_agent(user_input: StreamInput):
    ...
    return StreamingResponse(
        message_generator(user_input), media_type="text/event-stream"
    )
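
For completeness, the elided body of message_generator presumably runs the agent concurrently and drains output_queue into SSE frames. A minimal hypothetical sketch of that pattern (run_agent and the exact framing are my assumptions, not the repo's actual code):

import asyncio

async def message_generator(user_input):
    output_queue: asyncio.Queue = asyncio.Queue(maxsize=10)
    # The agent runs concurrently; TokenQueueStreamingHandler pushes tokens
    # into output_queue as the model generates them.
    agent_task = asyncio.create_task(run_agent(user_input, output_queue))  # run_agent is hypothetical
    while not agent_task.done() or not output_queue.empty():
        try:
            token = await asyncio.wait_for(output_queue.get(), timeout=0.1)
        except asyncio.TimeoutError:
            continue
        yield f"data: {token}\n\n"  # SSE framing for media_type="text/event-stream"
    await agent_task  # surface any exception from the agent run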

Versions:

AsyncCallbackHandler comes from langchain-core. It's declared as langchain-core~=0.2.26 in requirements.txt, which resolves to 0.2.28 in my case:

❯ pip show langchain-core
Name: langchain-core
Version: 0.2.28
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author: 
Author-email: 
License: MIT
Location: /.../site-packages
Requires: jsonpatch, langsmith, packaging, pydantic, PyYAML, tenacity, typing-extensions
Required-by: langchain, langchain-anthropic, langchain-community, langchain-experimental, langchain-groq, langchain-openai, langchain-text-splitters, langgraph, langgraph-checkpoint

I created my project from the template ~3 weeks ago, and I can't recall seeing this error in the logs back then, but I wouldn't vouch for it because it doesn't really break anything badly. I briefly looked through the langchain-core release notes and didn't spot any change between 0.2.26 and 0.2.28 (compare) that could cause this error to appear all of a sudden.

Once I formally implement on_chat_model_start (just a pass) in TokenQueueStreamingHandler, the error is gone.
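
Concretely, the workaround looks something like this (a sketch of my no-op override, with the signature simplified to **kwargs):

import asyncio

from langchain_core.callbacks import AsyncCallbackHandler


class TokenQueueStreamingHandler(AsyncCallbackHandler):
    """LangChain callback handler for streaming LLM tokens to an asyncio queue."""

    def __init__(self, queue: asyncio.Queue):
        self.queue = queue

    async def on_chat_model_start(self, serialized, messages, **kwargs) -> None:
        # No-op override: silences the NotImplementedError raised by the
        # base class when a chat model starts; only token events matter here.
        pass

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        if token:
            await self.queue.put(token)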

@JoshuaC215, please let me know if you need any additional info.