qnguyen3 / chat-with-mlx

An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.
https://twitter.com/stablequan
MIT License

NameError: name 'sys_prompt' is not defined #104

Closed momofish closed 5 months ago

momofish commented 5 months ago

I got an error when launching chat-with-mlx:

You try to use a model that was created with version 2.4.0.dev0, however, your version is 2.4.0. This might cause unexpected behavior or errors. In that case, try to update to the latest version.

```
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
    response = await iterator.__anext__()
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
    first_response = await async_iteration(generator)
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
  File "/Users/liqiang/Projects/llm/chat-with-mlx/chat_with_mlx/app.py", line 151, in chatbot
    if sys_prompt is not None:
       ^^^^^^^^^^
NameError: name 'sys_prompt' is not defined. Did you mean: 'get_prompt'?
```
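For context, a minimal sketch (not the project's actual code) of why this fails: checking `if sys_prompt is not None` does not protect against the name never having been assigned at all, because Python raises `NameError` before the comparison runs.

```python
# Sketch of the failure mode at app.py line 151: `sys_prompt` is
# presumably assigned elsewhere (e.g. by an indexing step), so if that
# step never ran, the name simply does not exist in any scope.

def chatbot():
    # This is NOT a safe existence check: if `sys_prompt` was never
    # assigned, Python raises NameError before evaluating `is not None`.
    if sys_prompt is not None:
        return sys_prompt
    return "default"

try:
    chatbot()
except NameError as e:
    print(e)  # name 'sys_prompt' is not defined
```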
qnguyen3 commented 5 months ago

Hi, if you only want to chat with a model, you don't need to hit Start Indexing. Indexing is only needed when you want to chat with a file.

qnguyen3 commented 5 months ago

Also, if you are chatting with a file, make sure to choose the data type; otherwise it will result in this error as well.