Chainlit / chainlit

Build Conversational AI in minutes ⚡️
https://docs.chainlit.io
Apache License 2.0

Why is the loading icon not showing? #1254

Open frei-x opened 2 months ago

frei-x commented 2 months ago

Describe the bug

(screenshot attached)

To Reproduce

Steps to reproduce the behavior:

  1. Ask a question
  2. Wait for an answer

chainlit>=1.1.400 has this problem; older versions work fine.

Expected behavior

(screenshot attached)

Software

Python 3.10.12

Linux version 5.15.0-105-generic (buildd@lcy02-amd64-007) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #115-Ubuntu SMP Mon Apr 15 09:52:04 UTC 2024

    # Note: llm, system_message, message_history, MAX_ROUNDS, DEBUG and
    # gen_time_start are defined earlier in the handler (not shown).
    ai_msg = cl.Message(content="")
    await ai_msg.send()

    # Trim the history to the most recent rounds once it grows too long.
    if len(message_history) > MAX_ROUNDS:
        message_history = (message_history[:MAX_ROUNDS // 2]
                           + message_history[-MAX_ROUNDS // 2:])

    stream = llm.stream([system_message] + message_history)
    if DEBUG:
        print("history", str(message_history))

    full_answer = ""
    first_chunk = True
    for part in stream:
        if content := part.content or "":
            full_answer += content
            if first_chunk:
                # Log the time to the first streamed token.
                first_chunk = False
                gen_time_end = time.time()
                print(gen_time_end - gen_time_start)
            await ai_msg.stream_token(content)

    message_history.append({"role": "assistant", "content": ai_msg.content})
    cl.user_session.set("message_history", message_history)
    await ai_msg.update()

I don't know what else to add; config.toml is the default.

When streaming a conversation, chainlit>=1.1.400 does not display the loading icon before the first word appears.
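
For illustration, a minimal standalone app of the shape that shows the problem (the asyncio.sleep and the hard-coded tokens are stand-ins for a slow model, not part of the original report):

    import asyncio
    import chainlit as cl

    @cl.on_message
    async def on_message(message: cl.Message):
        msg = cl.Message(content="")
        await msg.send()

        # Stand-in for a slow time-to-first-token: during this pause,
        # chainlit>=1.1.400 reportedly shows no loading icon.
        await asyncio.sleep(3)

        for token in ["Hello", ", ", "world", "!"]:
            await msg.stream_token(token)

        await msg.update()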

dokterbob commented 2 months ago

Hi @frei-x, thanks for the feedback! Could you please follow the feedback format for bug reports? It will make it easier for us to assess and replicate your issue.

Specifically, we need to know the expected behaviour. What should be shown in the red box? What does it look like prior to 1.1.400? What platform are you using? Does this problem occur on other platforms etc. etc.?

frei-x commented 2 months ago

When streaming a conversation, chainlit>=1.1.400 does not display the loading icon before the first word appears.

Sorry, I updated the issue; I have not tried other environments yet.


3eeps commented 2 months ago

I have the same issue.

julesterrien commented 2 months ago

Same here: the loader icon used to show after a question was sent by the user. This seems to be a recent regression.

dokterbob commented 2 months ago

Thanks for the feedback @frei-x @3eeps @julesterrien. Giving this some priority.

If we could have an e2e test for the loader, that would really be gold, like 80% of the fix. If someone wants to take this on (tests + fix), please give a shout here before starting on the PR so nobody does double work.

stephenrs commented 2 months ago

I'm also seeing this with 1.1.403rc0. 1.1.402 doesn't exhibit the issue for me.

dokterbob commented 2 months ago

On current main (86798bc), with resume-chat from the cookbook, I am seeing the following loader during generation: (screenshot attached)

To me, this seems expected behaviour.

Did you install dependencies and build the UI before testing?

poetry -C ./backend install && pnpm install && pnpm buildUi

If not, could you please supply a minimal example app and specific steps to replicate the issues you're experiencing? @stephenrs @frei-x @3eeps @julesterrien

The thing is, there seem to be specific conditions under which this issue pops up.

Similarly, on the latest release (1.1.402), with the same app, I am seeing the following: (screenshot attached)

Likewise, using a minimal example from the docs on 1.1.402, I'm not seeing a loading indicator before the first response: (screenshot attached)

Also, a minimal streaming example from the docs doesn't yield a 'loading before first generation' indicator on 1.1.402: (screenshot attached)
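
For reference, a sketch of what such a minimal streaming example looks like (paraphrased, not the exact docs code; the OpenAI client and model name are arbitrary choices):

    import chainlit as cl
    from openai import AsyncOpenAI

    client = AsyncOpenAI()

    @cl.on_message
    async def main(message: cl.Message):
        msg = cl.Message(content="")
        await msg.send()

        stream = await client.chat.completions.create(
            model="gpt-4o-mini",  # arbitrary chat model
            messages=[{"role": "user", "content": message.content}],
            stream=True,
        )
        async for part in stream:
            if token := part.choices[0].delta.content or "":
                await msg.stream_token(token)

        await msg.update()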

stephenrs commented 2 months ago

I could be wrong, but I assumed this issue was referring to the animated "busy cursor" loading graphic that is shown while the bot is "thinking" of a reply, as shown in the attachment. It appears in 1.1.402, but I haven't seen it in 1.1.403rc0. It's a comforting element of UX; without it, it "feels" like something has gone wrong.

You can use the same minimal app I attached to #1262 to test.

https://github.com/user-attachments/assets/704432ff-82e0-4bce-bbe3-bfd8b75acf79

MA3CIN commented 2 months ago

I also have this issue and consider it critical: without the loading icon, the app doesn't feel responsive.

GalRabin commented 1 month ago

@MA3CIN any update on this issue?

stephenrs commented 1 month ago

@MA3CIN Agreed. I think it's a fair and reasonable assessment that issues like this one should be given higher priority than moving files around on hard disks as in #1243

MA3CIN commented 1 month ago

@GalRabin I am still experiencing this issue on the 3 latest Chainlit versions... They are still working on it, as can be seen here https://github.com/Chainlit/cookbook/issues/136 and here https://github.com/Chainlit/cookbook/issues/85. I am in no way affiliated with this project, but an issue like this should be critical: without the loading icon, the UI feels completely unresponsive, especially if you run more advanced prompts or function chaining...

vaclcer commented 1 month ago

In my experience, the blinking dot only becomes visible after the first streaming token is added to the message. So what I do as a workaround is put an empty streaming token " " into the message (await msg.stream_token(" ")); the blinking dot appears, and then I start doing other things (like running the pipeline and generating some meaningful tokens).
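
A minimal sketch of that workaround (generate_tokens is a hypothetical stand-in for the real, slow pipeline):

    import asyncio
    import chainlit as cl

    async def generate_tokens(prompt: str):
        # Hypothetical stand-in for the real pipeline.
        await asyncio.sleep(3)
        for token in ["Hello", ", ", "world", "!"]:
            yield token

    @cl.on_message
    async def on_message(message: cl.Message):
        msg = cl.Message(content="")
        await msg.send()

        # Workaround: stream a blank token immediately so the blinking
        # dot appears before the pipeline produces its first real token.
        await msg.stream_token(" ")

        async for token in generate_tokens(message.content):
            await msg.stream_token(token)

        await msg.update()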

dokterbob commented 1 month ago

> @MA3CIN Agreed. I think it's a fair and reasonable assessment that issues like this one should be given higher priority than moving files around on hard disks as in #1243

We're prioritising (at our discretion) making it easier for us to review PRs. A big part of that is disentangling everything from everything else so we can start with automated testing.

In general, if you'd like an issue to get fixed, this is the way to get there:

  1. Provide easily reproducible test steps, preferably (in order):
     a. Python unit tests
     b. Cypress E2E tests
     c. Manual replication steps.
  2. A PR with code (only) remediating the issue at hand.

With the aforementioned provided, we usually get your contribution merged in ~1-3 days.

Without being able to at least consistently replicate an issue, we are literally unable to remedy it. As such, I request that you help us help you, rather than insisting that your particular issue is critical to you or that the free community support you're provided with is inadequate.

On that note, @vaclcer's comment seems a strong clue. If someone rolls a small snippet of an app demonstrating the issue, and demonstrating that it's gone after sending an empty token, that should help me or other community members push this forward.

stephenrs commented 1 month ago

@dokterbob I think it is important to recognize the reality that not everyone in this community has the time or skillset to become a member of your development team. For example, some people may not know how to use Cypress or other tools in your build pipeline, so maybe it's not quite sane or reasonable to expect folks to learn them simply to report an issue. We don't work for you, Mathijs, and you and Willy are the only people getting compensated to work on this project.

So, I encourage anyone who encounters an issue to report it in the most clear and unambiguous way they can, so members of the community can be made aware of the issue. This is the first priority.

Then, if someone has the time, skill, and desire to fix it themselves, they should. Otherwise, the ultimate responsibility to deliver a viable product falls to the people at the Chainlit/LiteralAI company who will benefit the most from any fixes to their software.

yyoussef11 commented 2 weeks ago

@dokterbob it appears that with the newest versions of chainlit (v1.2.0), if you set Settings.callback_manager = CallbackManager([cl.LlamaIndexCallbackHandler()]), you get the "loading"-like experience showing the steps Chainlit takes when retrieving from the vector store or when using the LLM.

However, if you do not set it, you will not have the "loading"-like experience.

It would be nice to add an action like cl.spinner (similar to what exists in Streamlit).
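
A sketch of that configuration, assuming recent llama-index module paths (Settings and CallbackManager moved into llama_index.core around 0.10):

    import chainlit as cl
    from llama_index.core import Settings
    from llama_index.core.callbacks import CallbackManager

    @cl.on_chat_start
    async def start():
        # With this handler installed, Chainlit renders its step/loading UI
        # while LlamaIndex queries the vector store or calls the LLM.
        Settings.callback_manager = CallbackManager(
            [cl.LlamaIndexCallbackHandler()]
        )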

dokterbob commented 2 weeks ago

> @dokterbob it appears that with the newest versions of chainlit (v1.2.0), if you set Settings.callback_manager = CallbackManager([cl.LlamaIndexCallbackHandler()]), you get the "loading"-like experience showing the steps Chainlit takes when retrieving from the vector store or when using the LLM.
>
> However, if you do not set it, you will not have the "loading"-like experience.
>
> It would be nice to add an action like cl.spinner (similar to what exists in Streamlit).

Ah. That explains why I might have been unable to replicate it! If you could give me a minimal project and steps to replicate this, we can finally start addressing the issue! 🙏🏼

In addition, perhaps a sensible workaround is to amend the documentation to clarify this behaviour: loading indicators require callback handlers to be configured!

@stephenrs We do our utmost best to support the community, and I am fully aware of differences in skill levels. I am merely stating that the easier bug reporters make it for us to replicate issues and fixes, the faster we can resolve them.

To make this very concrete: I've already spent several hours on this issue alone and have thus far not been able to replicate it locally. Considering the large number of users facing this problem, it's clear that it bothers many people, yet somehow we have not been able to use any of the information provided here (thus far!) to replicate it!

The gold standard is sharing a full code example plus specific steps to replicate an issue (e.g. expectation versus actual result).

Similarly, the easiest way for us to ensure that a fix indeed resolves the issue at hand is to have either clear steps to replicate the issue and demonstrate its resolution, or (preferably) unit or e2e tests demonstrating it.

stephenrs commented 2 weeks ago

@dokterbob I told you on August 29th that the problem was introduced with the release of 1.1.403rc0. So, all you needed to do was review the commits for that release, which were quite minimal, to find out what might have introduced the problem.

I did this myself, which is how I knew that Willy had broken the ROOT PATH functionality with his changes to server.py. We're not here to do your job for you. You are a software company and we are customers. Customers don't always know how to perfectly specify the exact conditions that will lead you to a solution for your broken code. That's your responsibility.

So, having clear steps to reproduce isn't the only way to solve problems when you are really interested in solving them rather than shifting responsibility to people who didn't write the broken code.

If I've said anything that doesn't align with the truth, please feel free to speak to it directly. Please don't waste anyone's time with your emotional bickering that ignores the facts.

stephenrs commented 2 weeks ago

@dokterbob As a sincere suggestion in case you haven't already tried it: Since so many people are having this problem, you might be able to surface it with a new/clean install outside of your (typically customized) dev environment. Personal dev environments can be sneaky about hiding problems.

Also, the steps I detailed in #1262 will still work to reproduce the problem, as I noted on Aug 29th... if you're willing to take the simple step of setting up an OpenAI assistant... which seems like a fundamental requirement for effectively supporting this project anyway.

If you want this solved, you already have everything you need to solve it.

gcleaves commented 1 week ago

I wonder if there is any correlation with this problem, which started at about the same time: https://github.com/Chainlit/chainlit/issues/1437