qnguyen3 / chat-with-mlx

An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework.
https://twitter.com/stablequan
MIT License
1.45k stars 131 forks

show error #62

Open liuyf90 opened 5 months ago

liuyf90 commented 5 months ago

I installed the package with pip on my M2 MacBook Air, but after entering a prompt and clicking Submit, the following error is reported in the terminal:

```
❯ chat-with-mlx -h
You try to use a model that was created with version 2.4.0.dev0, however, your version is 2.4.0. This might cause unexpected behavior or errors. In that case, try to update to the latest version.
```

```
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
    response = await iterator.__anext__()
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
    first_response = await async_iteration(generator)
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
  File "/opt/homebrew/lib/python3.11/site-packages/chat_with_mlx/app.py", line 166, in chatbot
    response = client.chat.completions.create(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/opt/homebrew/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
    return self._post(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 503
```
makevin23 commented 5 months ago

Try replacing EMPTY with your OpenAI API key on line 41 of app.py.

liuyf90 commented 5 months ago

> Try replacing EMPTY with your OpenAI API key on line 41 of app.py.

[screenshot] It shows this error now.

qnguyen3 commented 5 months ago

Hi @liuyf90, if you are chatting with a file, you need to specify whether you are chatting with a PDF or a YouTube video. Secondly, make sure your model is loaded.

WhiteNotWhite commented 5 months ago

> Hi @liuyf90, if you are chatting with a file, you need to specify whether you are chatting with a PDF or YouTube video, secondly, make sure your model is loaded.

> Try to replace EMPTY with your openai api key in line 41 in app.py.

Hi @qnguyen3, why does a local deployment need an OpenAI API key to be set?
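For context on why a key appears at all: OpenAI-style clients send a Bearer token with every request and typically refuse an empty key, so projects that talk to a local OpenAI-compatible server conventionally pass a dummy value such as "EMPTY" that the local server never validates. A minimal sketch of that convention (the helper below is hypothetical, for illustration only; it is not part of chat-with-mlx):

```python
def make_auth_header(api_key: str) -> dict:
    """Build the Authorization header an OpenAI-style client would send.

    Hypothetical helper: the real client rejects an empty key, which is
    why a dummy value like "EMPTY" is used when the target is a local
    server that never checks the token.
    """
    if not api_key:
        raise ValueError("api_key must be a non-empty string")
    return {"Authorization": f"Bearer {api_key}"}
```

So for a purely local deployment the key's value is irrelevant; it only has to be non-empty. A real OpenAI key is needed only if the client is pointed at api.openai.com.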