Chainlit / chainlit

Build Conversational AI in minutes ⚡️
https://docs.chainlit.io
Apache License 2.0

Streaming=True not working when I integrate Langchain. #34

Closed. altafr closed this issue 1 year ago.

altafr commented 1 year ago

import os

from langchain import PromptTemplate, OpenAI, LLMChain
import chainlit as cl

os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_API_KEY"

template = """Question: {question}

Answer: Let's think step by step."""

llm = OpenAI(temperature=0, streaming=True)


@cl.langchain_factory
def factory():
    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)
    return llm_chain
willydouhard commented 1 year ago

At the moment, only the intermediate steps are streamed, not the final response. Is that what you are seeing, or are the intermediate steps not being streamed either? See https://github.com/Chainlit/chainlit/issues/7#issuecomment-1569969095 for context.
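
If you do want the final LLMChain answer streamed to the UI, below is a minimal sketch of one way to do it yourself with a custom LangChain callback handler. This is not an official Chainlit API: StreamToUIHandler is a made-up name, the async Chainlit calls assume chainlit >= 0.3 with async handlers, and the callbacks= argument to acall is an assumption about the LangChain version in use.

import chainlit as cl
from langchain import PromptTemplate, OpenAI, LLMChain
from langchain.callbacks.base import AsyncCallbackHandler

template = """Question: {question}

Answer: Let's think step by step."""


class StreamToUIHandler(AsyncCallbackHandler):
    """Hypothetical handler: forwards each generated token to a Chainlit message."""

    def __init__(self):
        self.msg = None

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        if self.msg is None:
            self.msg = cl.Message(content="")
        # stream_token appends the token to the message bubble in the UI.
        await self.msg.stream_token(token)

    async def on_llm_end(self, response, **kwargs) -> None:
        if self.msg is not None:
            await self.msg.send()
            self.msg = None


@cl.on_message
async def main(message: str):
    prompt = PromptTemplate(template=template, input_variables=["question"])
    chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0, streaming=True))
    # Pass the handler per call so tokens stream into the open message.
    await chain.acall(message, callbacks=[StreamToUIHandler()])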

gruckion commented 1 year ago

import os

from langchain import PromptTemplate, OpenAI, LLMChain
import chainlit as cl

os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_API_KEY"

template = """Question: {question}

Answer: Let's think step by step."""

llm = OpenAI(temperature=0, streaming=True)


@cl.langchain_factory
def factory():
    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)
    return llm_chain

Since you are using an LLMChain, the streamed content will appear inside the working box. If you ask it to tell you a 200-word story and then expand the box, you will see that the content within is streaming.

Please confirm if this is working for you so we can close this issue.

If instead you want to stream the content directly, below is a simple example of how to feed a message straight back to the UI using streaming.

import chainlit as cl

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(streaming=True)

@cl.on_message
def main(message: str):
    cl.Message(content="")  # The Message content must be set to blank before triggering the LLM.
    llm([HumanMessage(content=message)])

The above workaround using cl.Message(content="") is required as of version 0.2.109; it will be fixed in a future release.
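
For completeness, here is a minimal sketch of what direct streaming looks like using Chainlit's stream_token API alone, assuming a version with async handlers; the static token list is just a stand-in for real LLM output.

import chainlit as cl


@cl.on_message
async def main(message: str):
    msg = cl.Message(content="")
    # Real tokens would come from the LLM callback; a static list stands in here.
    for token in ["Hello", ", ", "world", "!"]:
        await msg.stream_token(token)
    # Finalize the message once all tokens have been streamed.
    await msg.send()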

altafr commented 1 year ago

thanks