Open settur1409 opened 8 months ago
Did you manage to resolve the problem?
Hey, I don't know if it's a good fix or a bad one, but I added a `while True` in the main file so the Chainlit instance didn't get killed. I continued with that rather than digging much deeper. I was expecting the service to keep running; please help me out if there is some other (better) way to keep the service running.
Yes @settur1409, I am also facing the same problem while running the llama3 model.
I am facing the issue while using ChromaDB and OpenAI on AWS EKS, and I am unable to find a fix.
I suspect it's an issue with Chainlit and nothing related to the LLM or vector store. As a workaround you can have a `while True` in the main file.
Can you share how you put this `while True`? Was it in the `on_message` function? Thanks @settur1409.
Hope this helps:

```python
import chainlit as cl

@cl.on_chat_start
async def chat_start(): ...               # set up the chain / session

@cl.on_message
async def main(message: cl.Message): ...  # wait for user input, call chain.invoke

if __name__ == "__main__":
    while True: pass                      # keep the process alive as a workaround
```
I hope this helps others facing the same error. I resolved the issue by using concurrent.futures, which creates a separate thread to handle the task.
Below is a simple code example demonstrating how to run a text_to_speech function that takes time to execute. This solution resolved the 'Could not reach the server' error for me:
```python
import asyncio
import concurrent.futures

async def speak_async(answer: str):
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Run the blocking text_to_speech call in a worker thread so the
        # asyncio event loop stays responsive.
        await loop.run_in_executor(pool, text_to_speech, answer)
```
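For completeness, here is a minimal sketch of how `speak_async` from the snippet above could be called from a Chainlit handler; the `answer` string is just a stand-in for whatever your chain actually returns:

```python
import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    answer = f"Received: {message.content}"  # stand-in for your chain/LLM call
    # Offload the slow text_to_speech work so the event loop serving the
    # Chainlit UI is not blocked while audio is generated.
    await speak_async(answer)
    await cl.Message(content=answer).send()
```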
Describe the bug
I am trying to bring up a simple chatbot with Chainlit, using a local LLM, so there is some delay in the response. My problem is that I very frequently get a "could not reach the server" error. Even though I made the LLM call run via asyncio, I am still facing the same issue. Can someone help me with this? Below is the code that I am using to get the response from the LLM.
To Reproduce
Steps to reproduce the behavior: run the code below.

```python
@cl.on_message
async def main(message: cl.Message):
    await cl.Message(
        content=f"Received: {message.content}",
    ).send()
```
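Not part of the original report, but tying it to the workarounds above: a hedged sketch of how the slow local-LLM call could be pushed to a worker thread so the UI connection is not starved while the model responds (`llm_chain.invoke` is a hypothetical stand-in for the actual local model call):

```python
import asyncio
import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    loop = asyncio.get_running_loop()
    # llm_chain.invoke is a placeholder for the blocking local-LLM call;
    # running it in the default executor keeps the event loop free, so the
    # UI's connection checks are still answered while the model is busy.
    answer = await loop.run_in_executor(None, llm_chain.invoke, message.content)
    await cl.Message(content=str(answer)).send()
```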