langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

StreamlitCallbackHandler thinking.. / spinner Not stopping #11398

Closed by quantumalchemy 5 months ago

quantumalchemy commented 1 year ago

System Info

python 3.10.13 on Debian bookworm (Docker); streamlit 1.27.1; langchain 0.0.306

Who can help?

No response

Reproduction

Display the assistant response in a chat message container:

import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

# llm_chain and prompt are defined elsewhere in the app
with st.chat_message("🧞‍♂️"):
    message_placeholder = st.empty()
    cbh = StreamlitCallbackHandler(st.container())
    AI_response = llm_chain.run(prompt, callbacks=[cbh])

Expected behavior

the "Thinking.." spinner STOPS or hides after LLM finishes its response

No Parameters I can find here https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html that would affect this

dosubot[bot] commented 1 year ago

🤖

Hello,

Thank you for reaching out with your issue. Based on your description, it seems like you want to hide the "Thinking.." spinner in StreamlitCallbackHandler after the LLM finishes its response in LangChain.

To achieve this, you can modify the complete method in the LLMThought class. Currently, complete updates the container's label when the thought process finishes; if you remove that update, the "Thinking.." label will no longer be set when the thought completes.

Here is the modified complete method:

def complete(self, final_label: Optional[str] = None) -> None:
    """Finish the thought."""
    # Mark the thought complete without the label update that
    # previously rewrote the container text.
    self._state = LLMThoughtState.COMPLETE
    if self._collapse_on_complete:
        self._container.update(new_expanded=False)

In this modified version, the complete method no longer updates the label of the container when the thought process is completed. This means that the "Thinking.." spinner will not be displayed after the LLM finishes its response.

Please note that this modification should be done in the streamlit_callback_handler.py file in the langchain/callbacks/streamlit/ directory of the LangChain codebase.

Also, I noticed that you're using LangChain version 0.0.306. The latest version is 0.0.308. It might be beneficial to update to the latest version as it may contain improvements or bug fixes that could potentially resolve your issue.

I hope this helps! If you have any other questions or need further clarification, please don't hesitate to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

quantumalchemy commented 1 year ago

Thanks, but it's not working; it seems complete() is not being called anyway. Is there a generic way to strip out the spinner AND "Thinking.." and just display the LLM stream, nothing else? I see you can use a custom thought_labeler; can you give me an example of a custom thought_labeler that would have no spinner and no "Thinking..", just the stream?
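For reference, a custom labeler might look roughly like this (a minimal sketch, assuming the LLMThoughtLabeler interface from langchain's streamlit callback module; note the labeler only controls the label text, while the spinner belongs to the underlying status container, so blank labels may not remove it):

import streamlit as st
from langchain.callbacks.streamlit.streamlit_callback_handler import (
    LLMThoughtLabeler,
    StreamlitCallbackHandler,
)

class SilentThoughtLabeler(LLMThoughtLabeler):
    """Return empty labels so no "Thinking.." text is shown."""

    def get_initial_label(self) -> str:
        return ""

    def get_final_agent_thought_label(self) -> str:
        return ""

cbh = StreamlitCallbackHandler(
    st.container(),
    thought_labeler=SilentThoughtLabeler(),
)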

heiningair commented 1 year ago

I have the same issue. It never gets out of the "thinking..." (aka "running") state. It seems like this is only possible for agents, but not for chains.

Have you been able to find a workaround for that?

MiNeves00 commented 1 year ago

Same issue on my side. "Thinking" does not stop until I give it a new input.

fcasadome commented 1 year ago

I used st.rerun() to redraw the conversation as a fix for this, but it enters an infinite loop after upgrading to the new Streamlit version, 1.28.0.
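One pattern that avoids the loop (a hedged sketch, not from the original comment; the flag name is hypothetical) is to rerun only once per fresh response, tracked in session_state:

import streamlit as st

# An unconditional st.rerun() at the end of the script retriggers itself
# forever; gate it on a flag that is consumed by the very next run.
if prompt := st.chat_input("Ask something"):
    # ... run the chain and render the response here ...
    st.session_state["just_answered"] = True

if st.session_state.pop("just_answered", False):
    st.rerun()  # one redraw, then the flag is gone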

TheDarkTrumpet commented 10 months ago

I had the same issue. I decided to debug this further.

I created a fork and committed the change in: https://github.com/TheDarkTrumpet/langchain

I'm still pretty new to LangChain, so before I do a pull request, I will likely need some help verifying the changes.

The modification was to on_chain_end in the streamlit_callback_handler.py file, changing the pass to the following:

if "answer" in outputs:
    self.on_agent_finish(outputs["answer"])
    self._parent_container.markdown(outputs["answer"])

Hadi2525 commented 10 months ago

I have a solution that can probably serve as a remedy to replace StreamlitCallbackHandler until the issue is fixed.

If you set your handler to StreamingStdOutCallbackHandler, you can redirect stdout into Streamlit using a context manager. Here is an example:

from io import StringIO
from contextlib import contextmanager, redirect_stdout

@contextmanager
def st_capture(output_func):
    """Redirect stdout into a Streamlit output function."""
    with StringIO() as stdout, redirect_stdout(stdout):
        old_write = stdout.write

        def new_write(string):
            ret = old_write(string)
            output_func(stdout.getvalue())
            return ret

        stdout.write = new_write
        yield

When calling your LLM chain, run this:

with st.chat_message("assistant"):
    output = st.empty()
    with st_capture(output.info):
        chain.invoke(user_prompt)
ucola commented 9 months ago

Any progress on this issue? @Dosubot aside, @Hadi2525, could you post a complete example of your code?
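For reference, assembled end to end the approach might look like this (a hedged sketch; the chain construction, model, and prompt are assumptions, and st_capture is the helper defined above):

import streamlit as st
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI  # assumption: any streaming-capable LLM
from langchain.prompts import PromptTemplate

# Tokens stream to stdout; st_capture mirrors them into the Streamlit element.
llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))

if user_prompt := st.chat_input("Ask something"):
    with st.chat_message("assistant"):
        output = st.empty()
        with st_capture(output.info):
            chain.invoke({"question": user_prompt})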

BioStarr commented 8 months ago

Any info on this? I'm hitting the same bug.

KedoKudo commented 8 months ago

I made a tiny PR that should resolve this issue: https://github.com/streamlit/streamlit/pull/8311

In the meantime, please use the following code to force an update to the container status:

callback = StreamlitCallbackHandler(st.container())
# pass the callback into the run so _current_thought gets populated
response = llm.qa.invoke(user_prompt, config={"callbacks": [callback]})
callback._current_thought._container.update(
    label="",
    state="complete",
    expanded=True,
)
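Since this workaround reaches into private attributes, a slightly more defensive variant (an assumption, not part of the original comment) guards against the thought being absent:

# _current_thought is private and may be None if the callback never fired
thought = getattr(callback, "_current_thought", None)
if thought is not None:
    thought._container.update(label="", state="complete", expanded=True)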
BioStarr commented 8 months ago

Works, thanks :)

After the change is integrated, which part of langchain should I upgrade with pip?

KedoKudo commented 8 months ago

Works, thanks :)

After the change is integrated, which part of langchain should I upgrade with pip?

The change is in streamlit; langchain_community only provides a thin wrapper around the callback that streamlit ships. Once the PR is merged and released, upgrading streamlit itself (pip install --upgrade streamlit) is the relevant update.
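That relationship is visible in the wrapper itself; paraphrased rather than verbatim, langchain_community's entry point dispatches roughly like this:

def StreamlitCallbackHandler(parent_container, **kwargs):
    try:
        # Prefer the implementation that ships inside Streamlit itself...
        from streamlit.external.langchain import (
            StreamlitCallbackHandler as _StreamlitImpl,
        )
        return _StreamlitImpl(parent_container, **kwargs)
    except ImportError:
        # ...and fall back to the copy bundled with langchain_community.
        from langchain_community.callbacks.streamlit.streamlit_callback_handler import (
            StreamlitCallbackHandler as _FallbackImpl,
        )
        return _FallbackImpl(parent_container, **kwargs)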

BioStarr commented 8 months ago

Thanks, this works beautifully now. 5/5 stars for the fix.

Could I possibly bother you for the list of arguments? While I am waiting for the bot to respond, an empty text element is displayed until streaming starts, and I would like to know how to remove it.

[image: screenshot of the empty container shown before streaming starts]

KedoKudo commented 8 months ago

Thanks, this works beautifully now. 5/5 stars for the fix.

Could I possibly bother you for the list of arguments? While I am waiting for the bot to respond, an empty text element is displayed until streaming starts, and I would like to know how to remove it.

You are asking to modify the default behavior of st.container; the only solution I can think of is monkey patching, until streamlit provides a way to change the default text of an empty container:

callback = StreamlitCallbackHandler(st.container(), collapse_completed_thoughts=False)
# save the original handler so the patch can delegate to it
org_on_llm_start = callback.on_llm_start

def new_on_llm_start(*args, **kwargs):
    org_on_llm_start(*args, **kwargs)
    # write into the fresh thought container right away so it is not left empty
    callback._current_thought._container.write("")

# patch the handler
callback.on_llm_start = new_on_llm_start
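The patched callback is then used exactly like the original, e.g. (llm_chain being whatever chain the app already runs):

response = llm_chain.run(prompt, callbacks=[callback])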