langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Support for streaming in the langchain chains (eg., load_summarize_chain) #13644

Closed · databill86 closed this issue 7 months ago

databill86 commented 11 months ago

Feature request

Hello,

I'm not sure if this is already supported; I couldn't find anything in the documentation. Is there a way to make chains support streaming? It would be nice if we could get it working with something like load_summarize_chain.
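For context, one way partial output can be surfaced (and the mechanism LangChain's callback handlers build on) is a per-token callback: the LLM invokes a callback for each token as it is generated, before the full result is returned. Here is a minimal pure-Python sketch of that pattern; `fake_llm`, `summarize_chain`, and `on_token` are illustrative names, not LangChain APIs:

```python
from typing import Callable, List

def fake_llm(prompt: str, on_token: Callable[[str], None]) -> str:
    # Stand-in for a streaming LLM: emits the "summary" one token at a
    # time, invoking the callback for each token as it is produced.
    tokens = ["A", " short", " summary", "."]
    for tok in tokens:
        on_token(tok)
    return "".join(tokens)

def summarize_chain(docs: List[str], on_token: Callable[[str], None]) -> str:
    # Stand-in for a summarize chain: joins the documents into one prompt
    # and forwards the token callback down to the LLM.
    prompt = "Summarize the following content:\n\n" + "\n\n".join(docs)
    return fake_llm(prompt, on_token)

chunks: List[str] = []
result = summarize_chain(["doc one", "doc two"], on_token=chunks.append)
# Each partial token arrives through the callback before `result` is complete.
```

A UI would print or forward each token inside the callback instead of collecting it into a list.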

Or something like this:


    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.schema import Document
    from langchain.schema.output_parser import StrOutputParser
    from langchain.schema.prompt_template import format_document

    doc_prompt = PromptTemplate.from_template("{page_content}")

    chain = (
        {
            "content": lambda docs: "\n\n".join(
                format_document(doc, doc_prompt) for doc in docs
            )
        }
        | PromptTemplate.from_template("Summarize the following content:\n\n{content}")
        | OpenAI(
            temperature=1,
            model_name=llm_model,
            streaming=True,  # the parameter is `streaming`, not `stream`
        )
        | StrOutputParser()
    )

    docs = [
        Document(
            page_content=split,
            metadata={"source": "https://en.wikipedia.org/wiki/Nuclear_power_in_space"},
        )
        for split in text.split()  # note: splits on whitespace, one word per Document
    ]

    # invoke() returns the completed string; stream() yields chunks as they arrive
    for partial_result in chain.stream(docs):
        print(partial_result, end="", flush=True)
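Worth noting: LCEL runnables expose `.stream()` (and async `.astream()`) for exactly this, while `.invoke()` blocks until the full result is ready, so iterating over its return value just iterates characters of a finished string. A minimal pure-Python sketch of that invoke-vs-stream distinction, using a stub chain (`StubChain` is illustrative, not a LangChain class):

```python
from typing import Iterator, List

class StubChain:
    """Stand-in for an LCEL chain: invoke() returns the full result at
    once, stream() yields partial chunks as they become available."""

    def __init__(self, chunks: List[str]):
        self._chunks = chunks

    def invoke(self, _docs) -> str:
        # Blocks until everything is generated, then returns the whole string.
        return "".join(self._chunks)

    def stream(self, _docs) -> Iterator[str]:
        # Yields the result piece by piece.
        yield from self._chunks

chain = StubChain(["Nuclear ", "power ", "in ", "space."])

streamed = list(chain.stream(None))  # partial chunks, usable as they arrive
full = chain.invoke(None)            # one completed string at the end
```

The streamed chunks concatenate to the same string that `invoke()` returns; the difference is only in when the caller can start showing output.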

Motivation

I have long documents to summarize, so I would like to show partial results in streaming mode rather than making the user wait a long time for the final result.

Your contribution

No. If it's not possible, I'm willing to implement the summarization chain from scratch using the OpenAI library.

dosubot[bot] commented 11 months ago

🤖

It seems like you've got this under control, if you want help or have specific questions, let me know what I can do for you!

databill86 commented 11 months ago

I guess that's one of the reasons one shouldn't use tools like this when you have custom needs: once you need a little tweak, you're stuck :)

dosubot[bot] commented 11 months ago

@baskaryan, it seems like the user is facing some challenges with customization, could you provide your expertise on this matter?