Chainlit / chainlit

Build Conversational AI in minutes ⚡️
https://docs.chainlit.io
Apache License 2.0

getting 'could not reach the server' error #845

Open settur1409 opened 5 months ago

settur1409 commented 5 months ago

Describe the bug
I am trying to bring up a simple chatbot with Chainlit. I am using a local LLM, so there is some delay in the response. The problem is that I very frequently get a "could not reach the server" error. Even though I made the LLM call run via asyncio, I still face the same issue. Can someone help me with this? Below is the code I am using to get a response from the LLM.

To Reproduce
Steps to reproduce the behavior: run the code below.

@cl.on_message
async def main(message: cl.Message):
    await cl.Message(
        content=f"Received: {message.content}",
    ).send()

    print("llm invoked, wait for response")
    query = message.content
    response = asyncio.run(agent_executor.invoke(input=query))
    print("response received", response)
    await cl.Message(
        content=f"Bot Answer: {response}"
    ).send()
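
For reference, a minimal sketch of a non-blocking variant (assuming agent_executor.invoke is a synchronous, blocking LangChain call, as in the snippet above): instead of asyncio.run, the call is offloaded with cl.make_async so the event loop and websocket heartbeat stay responsive while the local LLM generates. Whether this resolves the "could not reach the server" error depends on the actual cause.

import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    await cl.Message(content=f"Received: {message.content}").send()

    # agent_executor is the same (synchronous) agent as above; wrapping its
    # invoke method with cl.make_async runs it in a worker thread, so the
    # event loop is not blocked while waiting for the local LLM.
    response = await cl.make_async(agent_executor.invoke)(message.content)

    await cl.Message(content=f"Bot Answer: {response}").send()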


saadjelbini commented 3 months ago

Did you manage to resolve the problem?

settur1409 commented 3 months ago

Hey, I don't know whether it's a good fix or a bad one, but I added a while True in the main file so the Chainlit instance didn't get killed. I continued with that rather than digging much deeper. I just needed the service to keep running; let me know if there is some other (better) way to keep it alive.

khadar1020 commented 2 months ago

Yes @settur1409, I am also facing the same problem while running the llama3 model.

abhinav-901 commented 2 months ago

I am facing the issue while using ChromaDB and OpenAI on AWS EKS. I have been unable to find a fix.

settur1409 commented 2 months ago

I suspect it's an issue with Chainlit itself and not related to the LLM or the vector store. As a workaround, you can add a while True in the main file.

httplups commented 2 months ago

Can you share how you added this while True? Was it in the on_message function? Thanks @settur1409.

settur1409 commented 2 months ago

Hope this helps:

imports ...

@cl.on_chat_start
async def chat_start():
    ...

@cl.on_message
async def main(message):
    # wait for user input, call chain invoke
    ...

if __name__ == "__main__":
    while True:
        pass
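
Filled out into something runnable, that sketch looks roughly like this. run_local_llm is just a stand-in for the actual agent_executor / chain call, and the cl.make_async wrapper is optional but keeps the event loop free while the model generates:

import time

import chainlit as cl

def run_local_llm(query: str) -> str:
    # Stand-in for the real chain / agent_executor call; the sleep mimics
    # a slow local model.
    time.sleep(20)
    return f"echo: {query}"

@cl.on_chat_start
async def chat_start():
    await cl.Message(content="Chatbot is ready.").send()

@cl.on_message
async def main(message: cl.Message):
    # Run the blocking call in a worker thread so Chainlit can keep
    # servicing the websocket while the model generates.
    response = await cl.make_async(run_local_llm)(message.content)
    await cl.Message(content=f"Bot Answer: {response}").send()

if __name__ == "__main__":
    # The while True workaround from this thread: it only runs when the
    # script is executed directly (python app.py) and simply keeps the
    # process alive; starting the app with `chainlit run app.py` does not
    # reach this block.
    while True:
        pass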