qnguyen3 / chat-with-mlx

An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.

Trouble loading a custom model #85

Closed (ktsi closed this issue 7 months ago)

ktsi commented 7 months ago

Hi.

I want to load a custom model.

My YAML configuration is:

```yaml
original_repo: ilsp-Meltemi-7B-Instruct-v1-4bit # The original HuggingFace Repo, this helps with displaying
mlx-repo: mlx-community/ilsp-Meltemi-7B-Instruct-v1-4bit # The MLX models Repo, most are available through mlx-community
quantize: 4bit # Optional: [4bit, 8bit]
default_language: multi # Optional: [en, es, zh, vi, multi]
```

Error log:

```
Starting MLX Chat on port 7860
Sharing: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/route_utils.py", line 258, in call_process_api
    output = await app.get_blocks().process_api(
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/blocks.py", line 1710, in process_api
    result = await self.call_function(
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/blocks.py", line 1250, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/utils.py", line 693, in wrapper
    response = f(*args, **kwargs)
  File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/chat_with_mlx/app.py", line 58, in load_model
    directory_path, "models", "download", model_name_list[1]
IndexError: list index out of range
```

What am I doing wrong?

ktsi commented 7 months ago

Bump. Anyone?

qnguyen3 commented 7 months ago

Hi @ktsi, you need to include the username in `original_repo` as well, i.e. `username/original_repo`.
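
For anyone hitting the same error, the fixed config would look something like the sketch below. The `ilsp/Meltemi-7B-Instruct-v1` id is an assumption inferred from the mlx-community repo name; substitute the actual `username/repo` of the upstream model if it differs:

```yaml
# Sketch of the corrected config; the upstream repo id is assumed, not verified.
original_repo: ilsp/Meltemi-7B-Instruct-v1  # must be username/repo, not just the repo name
mlx-repo: mlx-community/ilsp-Meltemi-7B-Instruct-v1-4bit
quantize: 4bit # Optional: [4bit, 8bit]
default_language: multi # Optional: [en, es, zh, vi, multi]
```

This also explains the `IndexError` in the traceback. A minimal sketch, assuming `load_model` splits the repo id on `/` to build the local download path (names here are illustrative, not the actual app.py code):

```python
model_name = "ilsp-Meltemi-7B-Instruct-v1-4bit"  # missing the "username/" prefix
model_name_list = model_name.split("/")
print(model_name_list)     # ['ilsp-Meltemi-7B-Instruct-v1-4bit'] -- only one element
print(model_name_list[1])  # IndexError: list index out of range
```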

ktsi commented 7 months ago

Thank you. We can close the issue now.