qnguyen3 / chat-with-mlx

An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.

openai.InternalServerError: Error code: 502 #43

yjc11 opened this issue 6 months ago (status: Open)

I have already loaded the model (screenshot attached), but I get this error:

```
Traceback (most recent call last):
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
    response = await iterator.__anext__()
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
    first_response = await async_iteration(generator)
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/chat_with_mlx/app.py", line 166, in chatbot
    response = client.chat.completions.create(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
    return self._post(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 502
```

qnguyen3 commented 6 months ago

Hi, have you tried other models?

RedemptionC commented 6 months ago

Hmm, that's confusing. I thought this uses a local LLM; why would OpenAI be involved at all?
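For context on why `openai` appears at all: the app serves the local MLX model behind an OpenAI-compatible HTTP endpoint and talks to it with the `openai` Python client (the traceback above ends in `client.chat.completions.create`), so no request actually goes to OpenAI. A minimal sketch of that pattern, assuming a local server on port 8080 (the port mentioned in a later comment) and a placeholder model name:

```python
# Sketch of the local OpenAI-compatible pattern. Assumptions: the port comes
# from a later comment in this thread, and "local-model" is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",  # local MLX server, not api.openai.com
    api_key="dummy",  # the local server does not check the key
)

# If nothing (or the wrong program) is listening on that port, the client
# surfaces the failure as an openai.* error, like the 502 above.
response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

A 502 from that call therefore means whatever answered on the local port was not the MLX server.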

nealsun commented 6 months ago

I got the same error.

yjc11 commented 6 months ago

Very strange: I did not change anything, and now it works normally.

qnguyen3 commented 6 months ago

Hi everyone, the code has been updated. You can try the manual installation now. I have not cut a release yet, so you can't `pip install -U chat-with-mlx` yet, but soon!

undefinedv commented 3 months ago

Hi guys! I got the same error too, and figured out that it happens when port 8080 is already in use by another program. I closed the other program, restarted chat-with-mlx, and it works well now.
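To confirm this before hunting for the culprit, a quick check (a sketch; port 8080 is taken from the comment above, not from the project docs) is to see whether anything is already listening on the port:

```python
# Quick port check: connect_ex() returns 0 when a connection succeeds,
# i.e. when some program is already listening on the port.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    if s.connect_ex(("127.0.0.1", 8080)) == 0:
        print("Port 8080 is in use -- find and close that program first.")
    else:
        print("Port 8080 looks free; chat-with-mlx should be able to bind it.")
```

On macOS, `lsof -i :8080` will show which process owns the port.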