khoj-ai / khoj

Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (e.g gpt, claude, gemini, llama, qwen, mistral).
https://khoj.dev
GNU Affero General Public License v3.0
14.11k stars · 703 forks

[FIX] Deployment unsuccessful regardless of whether I use Ollama or OpenAI; all show uvicorn.error: connection closed #782

Closed — leoleelxh closed this issue 1 month ago

leoleelxh commented 5 months ago

The deployment was not successful. Regardless of whether I used Ollama or OpenAI, the logs all show uvicorn.error: connection closed.

```
server-1 | [15:31:04.245190] DEBUG uvicorn.error: = connection is CLOSING               protocol.py:1227
server-1 | [15:31:04.245975] DEBUG uvicorn.error: > CLOSE 1000 (OK) [2 bytes]           protocol.py:1178
server-1 | [15:31:04.246598] DEBUG uvicorn.error: = connection is CLOSED                protocol.py:1497
server-1 | [15:31:04.247182] DEBUG uvicorn.error: ! failing connection with code 1006   protocol.py:1412
server-1 | [15:31:04.247735] ERROR uvicorn.error: closing handshake failed              server.py:248
server-1 | Traceback (most recent call last):
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/server.py", line 244, in handler
server-1 |     await self.c…
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 770, in close
server-1 |     await self.w…
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1236, in write_close_frame
server-1 |     await self.write…
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1209, in write_frame
server-1 |     await self.drain()
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1198, in drain
server-1 |     await self.ensur…
server-1 |   File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 939, in ensure_open
server-1 |     raise self.conne…
server-1 | ConnectionClosedError: sent 1000 (OK); no close frame received
server-1 | [15:31:04.365601] INFO  uvicorn.error: connection closed                     server.py:264
server-1 | [15:31:04.366237] DEBUG uvicorn.error: x half-closing TCP connection         protocol.py:1319
database-1 | 2024-05-30 15:32:28.902 UTC [27] LOG: checkpoint starting: time
database-1 | 2024-05-30 15:32:30.232 UTC [27] LOG: checkpoint complete: wrote 16 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.316 s, sync=0.006 s, total=1.331 s; sync files=15, longest=0.003 s, average=0.001 s; distance=18 kB, estimate=18 kB
server-1 | [15:32:43.029176] INFO  khoj.configure: 📡 Uploading telemetry to https://khoj.beta.haletic.com/v1/telemetry...  configure.py:345
server-1 | [15:32:43.030050] DEBUG khoj.configure: Telemetry state:                     configure.py:346
server-1 | [{'telemetry_type': 'api', 'server_version': '1.12.1', 'os': 'Linux',
server-1 |   'timestamp': '2024-05-30 15:28:15', 'client_host': '192.168.65.1',
server-1 |   'user_agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
server-1 |   'referer': 'http://localhost:42110/', 'host': 'localhost:42110',
server-1 |   'server_id': '9fe7a503-fa45-40bd-9243-a43b4ab20064',
server-1 |   'subscription_type': 'standard', 'is_recurring': False,
server-1 |   'client_id': 'default', 'api': 'chat_options'},
server-1 |  …]  (further near-identical records for the 'chat_sessions', 'chat_history', 'get_all_filenames' and 'update' APIs elided)
```

leoleelxh commented 5 months ago

(screenshot attached)

jppaolim commented 5 months ago

Same here... same with ollama ... nothing works :(

userdehghani commented 5 months ago

same here

debanjum commented 5 months ago

That's unfortunate! The stacktrace seems incomplete/insufficient to root-cause the issue. I'd need more info to debug.

It'd be great if you could share information about how you installed Khoj, which OS and which chat model you're using.

jppaolim commented 5 months ago

Right. On my end it's a Docker installation: I downloaded the compose file, built it, then downloaded the latest client (ARM version), so I'm running a Mac M1 Max on Sonoma 14.5. It doesn't connect to any local OpenAI-compatible endpoint, nor to Ollama; I didn't try the remote version.

sabaimran commented 5 months ago

Was there more to the stack trace? Usually, when this error comes up, there's another error that causes it.

leoleelxh commented 5 months ago

> That's unfortunate! The stacktrace seems incomplete/insufficient to root-cause the issue. I'd need more info to debug.
>
> It'd be great if you could share information about how you installed Khoj, which OS and which chat model you're using.

My environment is: Win11 + WSL2 + Docker. I installed and started Khoj according to the installation documentation; regardless of whether the Ollama or OpenAI model is configured, this issue comes up.
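In case it helps gather the full trace the maintainers asked for, the complete server log can be captured from the compose deployment like this (assuming the service is named `server`, matching the `server-1` prefix in the log above):

```shell
# Capture the complete, uncolored server log to a file for attaching to the issue.
docker compose logs --no-color server > khoj-server.log

# Or follow the logs live while reproducing the error in the web UI.
docker compose logs -f server
```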

jppaolim commented 5 months ago

Opened #796

sabaimran commented 5 months ago

Can you make sure you're not using the offline chat mode for your default (first) chat model when running with Docker? That generally won't work because of RAM constraints. Ideally, I'd also need more info about your openai processor conversation settings if you're using Ollama, plus any stack traces.
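For the Ollama path, one quick sanity check is whether the endpoint is reachable at all (a sketch; it assumes Ollama runs on the Docker host at its default port 11434 and that the container reaches the host via `host.docker.internal`):

```shell
# From the host: is Ollama up and serving its OpenAI-compatible API?
curl -s http://localhost:11434/v1/models

# From inside the khoj server container: can it reach Ollama at all?
docker compose exec server curl -s http://host.docker.internal:11434/api/tags
```

If the second command fails while the first succeeds, the problem is container-to-host networking rather than Khoj or the model configuration.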

stark1tty commented 5 months ago

Also having this issue

sabaimran commented 1 month ago

Hey all, closing this out for now as it hasn't been updated. If you are still having this issue, it would be great to provide your OS, how you're running your server (source, python package, docker), stack traces, and any chat model configuration details.

We've also rewritten much of the code for the chat flow since then, so it's quite likely a lot of these issues are addressed.

debanjum commented 1 month ago

> My environment is: Win11 + WSL2 + Docker. I installed and started Khoj according to the installation documentation; regardless of whether the Ollama or OpenAI model is configured, this issue comes up.

I've also tested Khoj on Windows 11 with WSL2 and Docker (using the WSL2 backend). It should work after the fixes in #919 are merged.