chantszho opened 1 month ago
Sorry about that. The web request has a 60s timeout; for databases with over 2000 tables, we will optimize the request handling.
Thanks for your reply. How soon can you optimize the request handling? In the next version?
This issue has been marked as stale because it has been over 30 days without any activity.
@chantszho During the 60 seconds of the request, will clicking other buttons on the page result in an error? It seems the overall service is stuck for those 60 seconds.
Search before asking
Operating system information
Linux
Python version information
DB-GPT version
main
Related scenes
Installation Information
[X] Installation From Source
[ ] Docker Installation
[ ] Docker Compose Installation
[ ] Cluster Installation
[ ] AutoDL Image
[ ] Other
Device information
Tesla V100-SXM2-32GB
Models information
LLM:qwen-max EMBEDDING_MODEL:text2vec-large-chinese
What happened
Hello. When I was using DB-GPT and connecting to a Hive database, the error "Request error timeout of 60000ms exceeded" appeared on the web page, but the backend was still running normally without any errors. How can I solve this problem?
What you expected to happen
I think it's a problem in the web client-side code, but I'm not sure.
How to reproduce
Find a database with a lot of data, more than 2000 tables. Then try to connect to the Hive database; when the connection takes over 1 minute, the error appears in the upper right corner. However, if you look at your Python terminal, it is still trying to connect to the database without any error, and it finishes once it has loaded all the data.
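The symptom described above (the web client gives up while the backend keeps working) can be reproduced in miniature, independent of Hive. In this sketch the 60s client timeout is scaled down to 0.1s and `slow_connect` is a hypothetical stand-in for loading metadata from a database with 2000+ tables:

```python
import concurrent.futures
import time

done = {"finished": False}

def slow_connect():
    # Stand-in for the slow Hive connection; imagine >60s of metadata loading.
    time.sleep(0.3)
    done["finished"] = True
    return "connected"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = pool.submit(slow_connect)
try:
    # The "client" waits only 0.1s, mimicking the web page's 60000ms timeout.
    future.result(timeout=0.1)
except concurrent.futures.TimeoutError:
    print("client: Request error timeout exceeded")

# The worker thread is not cancelled by the client's timeout; it runs to
# completion on its own, just like the backend in the report.
pool.shutdown(wait=True)
print("backend finished:", done["finished"])
```

This matches the report: the timeout is purely on the waiting side, so the backend log shows no error and the work eventually completes.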
Additional context
No response
Are you willing to submit PR?