Canner / WrenAI

🚀 An open-source SQL AI (Text-to-SQL) Agent that empowers data and product teams to chat with their data. 🤘
https://getwren.ai/oss
GNU Affero General Public License v3.0

raise asyncio.TimeoutError from None TimeoutError #654

Open paulge1021 opened 2 months ago

paulge1021 commented 2 months ago

Describe the bug
In the Home tab, entering a question in the Ask input returns "Internal server error".

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Home' tab
  2. Input Question
  3. Click on Ask
  4. See error 'Internal server error'

Expected behavior
The question should be answered; instead the service raises `asyncio.TimeoutError from None` (TimeoutError).

Maybe expose the timeout setting.


Additional context
From wrenai-wren-ai-service.log:


```plaintext
> generate [src.pipelines.ask_details.generation.generate()] encountered an error<
> Node inputs:
{'generator': '<src.providers.llm.ollama.AsyncGenerator object at...',
 'prompt': "<Task finished name='Task-1067' coro=<AsyncGraphAd..."}

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
  File "/src/utils.py", line 118, in wrapper_timer
    return await process(func, *args, **kwargs)
  File "/src/utils.py", line 102, in process
    return await func(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 182, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 422, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 180, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/ask_details/generation.py", line 61, in generate
    return await generator.run(prompt=prompt.get("prompt"))
  File "/src/providers/llm/ollama.py", line 109, in run
    response = await session.post(
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/client.py", line 608, in _request
    await resp.start(conn)
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 971, in start
    with self._timer:
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 735, in __exit__
    raise asyncio.TimeoutError from None
TimeoutError

Oh no an error! Need help with Hamilton? Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-1bjs72asx-wcUTgH7q7QX1igiQ5bbdcg

2024-09-05 05:41:11,206 - wren-ai-service - ERROR - ask-details pipeline - OTHERS: (ask_details.py:112)
Traceback (most recent call last):
  File "/src/web/v1/services/ask_details.py", line 90, in ask_details
    generation_result = await self._pipelines["generation"].run(
  File "/src/utils.py", line 118, in wrapper_timer
    return await process(func, *args, **kwargs)
  File "/src/utils.py", line 102, in process
    return await func(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 182, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 422, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 180, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/ask_details/generation.py", line 114, in run
    return await self._pipe.execute(
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 368, in execute
    raise e
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 359, in execute
    outputs = await self.raw_execute(final_vars, overrides, display_graph, inputs=inputs)
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 320, in raw_execute
    raise e
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 315, in raw_execute
    results = await await_dict_of_tasks(task_dict)
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
    coroutines_gathered = await asyncio.gather(*coroutines)
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
    return await val
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 91, in new_fn
    fn_kwargs = await await_dict_of_tasks(task_dict)
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
    coroutines_gathered = await asyncio.gather(*coroutines)
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
    return await val
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
  File "/src/utils.py", line 118, in wrapper_timer
    return await process(func, *args, **kwargs)
  File "/src/utils.py", line 102, in process
    return await func(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 182, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 422, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 180, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/ask_details/generation.py", line 61, in generate
    return await generator.run(prompt=prompt.get("prompt"))
  File "/src/providers/llm/ollama.py", line 109, in run
    response = await session.post(
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/client.py", line 608, in _request
    await resp.start(conn)
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 971, in start
    with self._timer:
  File "/app/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 735, in __exit__
    raise asyncio.TimeoutError from None
TimeoutError
INFO:     172.18.0.6:37524 - "GET /v1/ask-details/cbffda05-060c-4d60-955f-efae1116d820/result HTTP/1.1" 200 OK
```
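The final frame in the log above is aiohttp's request timer firing: when the request outlives its deadline, the timer's context manager raises `asyncio.TimeoutError`, which the service surfaces to the UI as the generic "Internal server error". As a minimal illustration (not WrenAI code), the same exception can be reproduced with `asyncio.wait_for` standing in for the timed HTTP call, no aiohttp required:

```python
import asyncio

async def slow_llm_call() -> str:
    # Stand-in for an Ollama request that takes longer than the deadline.
    await asyncio.sleep(10)
    return "response"

async def main() -> str:
    try:
        # Like aiohttp's request timer, wait_for raises asyncio.TimeoutError
        # when the deadline elapses before the coroutine finishes.
        return await asyncio.wait_for(slow_llm_call(), timeout=0.1)
    except asyncio.TimeoutError:
        return "TimeoutError"

print(asyncio.run(main()))  # → TimeoutError
```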

cyyeh commented 2 months ago

@paulge1021 thanks for reporting this issue. The error is due to the LLM model timing out. We'll fix this issue and expose environment variables (`LLM_TIMEOUT`, `EMBEDDER_TIMEOUT`) so users can fill in custom timeout values.
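For illustration, wiring such an environment variable into the aiohttp call could look roughly like the sketch below. This is an assumption about the eventual fix, not the actual WrenAI implementation; the function and URL are hypothetical, while `aiohttp.ClientTimeout` and `ClientSession(timeout=...)` are real aiohttp APIs:

```python
import os
import aiohttp

# Hypothetical: read the user-supplied timeout (seconds), defaulting to 120.
LLM_TIMEOUT = float(os.getenv("LLM_TIMEOUT", "120"))

async def post_generate(url: str, payload: dict) -> dict:
    # ClientTimeout(total=...) bounds the whole request; exceeding it
    # raises asyncio.TimeoutError, as seen in the log above.
    timeout = aiohttp.ClientTimeout(total=LLM_TIMEOUT)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.post(url, json=payload) as resp:
            return await resp.json()
```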

camilleconte8 commented 2 months ago

I am struggling with this too. I have set the LLM and embedder timeouts to 600 in all of the .env files, as well as in my ollama.py file, and I still get this log of errors in my wren-ai-service container:

Click to view Error Log

```plaintext
2024-09-19 14:14:18 ********************************************************************************
2024-09-19 14:14:18 > filter_columns_in_tables [src.pipelines.ask.retrieval.filter_columns_in_tables()] encountered an error<
2024-09-19 14:14:18 > Node inputs:
2024-09-19 14:14:18 {'prompt': "
```

Any suggestions?