mlx-chat / mlx-chat-app

Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM).

Errors on first run after download is complete #34

Open jmather opened 7 months ago

jmather commented 7 months ago

Here's the log output:

[1] Server error: 
[1] l-00002-of-00003.safetensors: 100%|██████████| 5.00G/5.00G [07:48<00:00, 16.7MB/s]
[1] Server error: 03.safetensors:  99%|█████████▉| 4.92G/4.94G [07:49<00:01, 17.7MB/s]
[1] 
[1] Server error: 03.safetensors: 100%|█████████▉| 4.93G/4.94G [07:49<00:00, 19.6MB/s]
[1] 
[1] Server error: 03.safetensors: 100%|█████████▉| 4.94G/4.94G [07:50<00:00, 22.5MB/s]
[1] 
model-00001-of-00003.safetensors: 100%|██████████| 4.94G/4.94G [07:50<00:00, 10.5MB/s]
[1] 
Fetching 11 files:  27%|██▋       | 3/11 [07:51<23:15, 174.49s/it]
Fetching 11 files: 100%|██████████| 11/11 [07:51<00:00, 42.85s/it]
[1] 
[1] Server output: /Users/someguy/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/cf47bb3e18fe41a5351bc36eef76e9c900847c89
[1] 
[1] Server output: [INFO] Quantizing
[1] 
[1] Server output: [INFO] Saving to /Users/someguy/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2-mlx-q
[1] 
[1] Server error: 127.0.0.1 - - [07/Mar/2024 12:48:50] "POST /api/init HTTP/1.1" 200 -
[1] 127.0.0.1 - - [07/Mar/2024 12:48:50] "POST /api/init HTTP/1.1" 500 -
[1] 
[1] Server output: Error: [Errno 32] Broken pipe
[1] 
[1] Server error: ----------------------------------------
[1] Exception occurred during processing of request from ('127.0.0.1', 53133)
[1] 
[1] Server error: Traceback (most recent call last):
[1] 
[1] Server error:   File "/Users/someguy/code/ai/mlx-chat-app/server/server.py", line 180, in do_POST
[1]     self._set_headers(200)
[1]   File "/Users/someguy/code/ai/mlx-chat-app/server/server.py", line 127, in _set_headers
[1]     self.end_headers()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 538, in end_headers
[1]     self.flush_headers()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 542, in flush_headers
[1]     self.wfile.write(b"".join(self._headers_buffer))
[1]   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 834, in write
[1]     self._sock.sendall(b)
[1] BrokenPipeError: [Errno 32] Broken pipe
[1] 
[1] During handling of the above exception, another exception occurred:
[1] 
[1] 
[1] Server error: Traceback (most recent call last):
[1] 
[1] Server error:   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 317, in _handle_request_noblock
[1]     self.process_request(request, client_address)
[1]   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 348, in process_request
[1]     self.finish_request(request, client_address)
[1]   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 361, in finish_request
[1]     self.RequestHandlerClass(request, client_address, self)
[1] 
[1] Server error:   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 755, in __init__
[1]     self.handle()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 436, in handle
[1]     self.handle_one_request()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 424, in handle_one_request
[1]     method()
[1]   File "/Users/someguy/code/ai/mlx-chat-app/server/server.py", line 185, in do_POST
[1]     self._set_headers(500)
[1]   File "/Users/someguy/code/ai/mlx-chat-app/server/server.py", line 127, in _set_headers
[1]     self.end_headers()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 538, in end_headers
[1]     self.flush_headers()
[1]   File "/Users/someguy/miniconda3/lib/python3.11/http/server.py", line 542, in flush_headers
[1]     self.wfile.write(b"".join(self._headers_buffer))
[1]   File "/Users/someguy/miniconda3/lib/python3.11/socketserver.py", line 834, in write
[1]     self._sock.sendall(b)
ParkerSm1th commented 6 months ago

Hey Jacob!

Ah, yes. This is something we identified when making the build for the alpha release. @stockeh probably has the most context here, and I think he has a solution in mind, but feel free to share any ideas you might have!

Thanks again for using MLX Chat!