langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

ChatOllama stream method raises warn_deprecated NotImplementedError #14980

Closed: v-byte-cpu closed this issue 2 months ago

v-byte-cpu commented 7 months ago

System Info

langchain version: v0.0.352
python version: 3.11

Hi there! After PR https://github.com/langchain-ai/langchain/pull/14713 was merged, I started getting errors in the stream() method:

File .../lib/python3.11/site-packages/langchain_core/_api/deprecation.py:295, in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)
    293 if not removal:
    294     removal = f"in {removal}" if removal else "within ?? minor releases"
--> 295     raise NotImplementedError(
    296         f"Need to determine which default deprecation schedule to use. "
    297         f"{removal}"
    298     )
    299 else:
    300     removal = f"in {removal}"

NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
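For context, the branch shown in the traceback fires whenever warn_deprecated is called without pending=True and without a removal version. A minimal sketch that reproduces the same error directly (the since and name values here are only illustrative):

from langchain_core._api.deprecation import warn_deprecated

# Assumption: no removal schedule given and pending left at its default,
# which is the combination the traceback above shows raising.
warn_deprecated(since="0.0.352", name="ChatOllama.stream")
# NotImplementedError: Need to determine which default deprecation schedule to use. ...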

I guess this decorator should be given a pending=True argument.
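As a rough illustration of that suggestion (the decorated method, version, and alternative below are hypothetical, not the actual code from the PR), marking the deprecation as pending means warn_deprecated no longer demands a removal version:

from langchain_core._api import deprecated

class MyChatModel:
    # Hypothetical sketch: pending=True makes this a pending deprecation,
    # so no removal schedule is required and stream() keeps working.
    @deprecated(since="0.0.352", alternative="astream", pending=True)
    def stream(self, input):
        ...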

Who can help?

@hwchase17 @agola11

Reproduction

from langchain.chat_models import ChatOllama

llm = ChatOllama(
    model="openchat:7b-v3.5-1210-q4_K_M",
)

# Streaming the response raises the NotImplementedError shown above
for chunk in llm.stream("Where were the Olympics held?"):
    print(chunk, end="", flush=True)

Expected behavior

Successful streaming output from the LLM.

dosubot[bot] commented 7 months ago

🤖

It's great to see that you've identified a potential issue and a possible solution related to the stream() method in LangChain. Your observation about adding a pending=True argument to the decorator seems like a step in the right direction.

If you're interested in contributing to LangChain, I would encourage you to go ahead and create a pull request with your proposed fix. The maintainers will review your changes and determine whether they align with the project's goals.

Thank you for your interest in improving LangChain!


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

tylertitsworth commented 6 months ago

I'm getting the same issue in my app. Chainlit offers a generic LangChain provider that streams the model backend request for the provided LLM input. When I add support for updating slider inputs in a playground request like so:

result = await make_async(self.llm.stream)(
    input=messages, # User Prompt
    **request.prompt.settings # User Inputs like temperature, top_K, etc.
)

I get the error above.

When I downgrade to v0.0.350 I also receive the same error, so I am unsure if the issue is in the specific PR you mentioned.

The out-of-the-box GenericLangchainProvider in chainlit for v0.0.352 doesn't work because of this issue.

Edit: On followup testing, it appears that v0.0.349 is the release that is working for me.
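For anyone else blocked by this, a quick workaround until a fix lands is to pin to that release (version taken from my testing above):

pip install "langchain==0.0.349"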

https://github.com/langchain-ai/langchain/assets/43555799/827dfcfd-d66f-42ec-aa47-c4ae1df29bbf

romellfudi commented 6 months ago

The issue still persists in the latest version. Please provide guidance on how to handle streaming in ChatOllama. @v-byte-cpu do you have any updates?

tylertitsworth commented 6 months ago

The issue was fixed in #15104.
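If you are still on an affected release, upgrading past that fix should resolve it (exact command depends on your environment):

pip install -U langchain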

kesavan22 commented 6 months ago

I'm facing this issue with GooglePalm in the latest version.