mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Question] can't download model #2571

Closed · panghongtao closed this issue 4 months ago

panghongtao commented 4 months ago

❓ General Questions

I run `mlc_llm package`. My mlc-package-config.json:

```json
{
    "device": "android",
    "model_list": [
        {
            "model": "HF://mlc-ai/Qwen-7B-Chat-q4f16_1-MLC",
            "model_id": "Qwen-7B-Chat-q4f16_1-MLC",
            "estimated_vram_bytes": 3000000004
        }
    ]
}
```

The model does exist on Hugging Face: https://hf-mirror.com/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC

I tried other models, but they all fail with the same error.

I saw this log line, but the folder D:\Users\W9057210.ADC\AppData\Local\Temp\tmpc399rnv1\tmp does not exist on my machine:

```
[2024-06-12 18:48:57] INFO download_cache.py:56: [Git] Cloning https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git to D:\Users\W9057210.ADC\AppData\Local\Temp\tmpc399rnv1\tmp
```

```
(mlc-panghongtao) E:\project\mlc-llm\android\MLCChat>mlc_llm package
[2024-06-12 18:48:57] INFO package.py:327: MLC LLM HOME: "E:\project\mlc-llm"
[2024-06-12 18:48:57] INFO package.py:28: Clean up all directories under "dist\bundle"
[2024-06-12 18:48:57] INFO jit.py:43: MLC_JIT_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY
[2024-06-12 18:48:57] INFO download_cache.py:227: Downloading model from HuggingFace: HF://mlc-ai/Qwen-7B-Chat-q4f16_1-MLC
[2024-06-12 18:48:57] INFO download_cache.py:29: MLC_DOWNLOAD_CACHE_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY
[2024-06-12 18:48:57] INFO download_cache.py:56: [Git] Cloning https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git to D:\Users\W9057210.ADC\AppData\Local\Temp\tmpc399rnv1\tmp
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\support\download_cache.py", line 57, in git_clone
    subprocess.run(
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'clone', 'https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git', '.tmp']' returned non-zero exit status 128.
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Scripts\mlc_llm.exe\__main__.py", line 7, in <module>
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\__main__.py", line 53, in main
    cli.main(sys.argv[2:])
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\cli\package.py", line 64, in main
    package(
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\interface\package.py", line 351, in package
    model_lib_path_for_prepare_libs = build_model_library(
                                      ^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\interface\package.py", line 73, in build_model_library
    model_path = download_cache.get_or_download_model(model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\support\download_cache.py", line 228, in get_or_download_model
    model_path = download_and_cache_mlc_weights(model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\support\download_cache.py", line 180, in download_and_cache_mlc_weights
    git_clone(git_url, tmp_dir, ignore_lfs=True)
  File "C:\ProgramData\anaconda3\envs\mlc-panghongtao\Lib\site-packages\mlc_llm\support\download_cache.py", line 70, in git_clone
    raise ValueError(
ValueError: Git clone failed with return code 128: None. The command was: ['git', 'clone', 'https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git', '.tmp']
```

tqchen commented 4 months ago

The tmp folder is deleted after the download completes; in your case there seems to be some issue with the download itself. You can also try to download manually (by running the command below) to a local path and then pass that local path in:

['git', 'clone', 'https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git', '.tmp']

Note that the default command may not pick up mirrors.
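
For reference, a manual download might look like the sketch below. Assumptions: Git LFS is installed, and the destination directory dist/models/Qwen-7B-Chat-q4f16_1-MLC is a hypothetical local path; swap the clone URL for https://hf-mirror.com/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git if you need the mirror.

```sh
# One-time: enable Git LFS so the weight shards are fetched, not just pointer files.
git lfs install

# Clone the MLC weights repo to a local path of your choosing
# (hypothetical destination; use the hf-mirror.com URL if huggingface.co is blocked).
git clone https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git dist/models/Qwen-7B-Chat-q4f16_1-MLC

# Make sure the LFS-tracked weight files are actually downloaded.
cd dist/models/Qwen-7B-Chat-q4f16_1-MLC
git lfs pull
```

If the clone fails with exit status 128 again, running it by hand at least surfaces the underlying git error (network, proxy, or authentication) directly on stderr.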

panghongtao commented 4 months ago

> The tmp folder is deleted after the download completes; in your case there seems to be some issue with the download itself. You can also try to download manually (by running the command below) to a local path and then pass that local path in:
>
> ['git', 'clone', 'https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git', '.tmp']
>
> Note that the default command may not pick up mirrors.

Why does setting this configuration trigger a download at all? I thought this step only bundles the configuration information into the app; the model is then selected on my phone, and the download happens on the phone. So if the model is downloaded on the phone, does `mlc_llm package` still need to download it?

panghongtao commented 4 months ago

> The tmp folder is deleted after the download completes; in your case there seems to be some issue with the download itself. You can also try to download manually (by running the command below) to a local path and then pass that local path in:
>
> ['git', 'clone', 'https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git', '.tmp']
>
> Note that the default command may not pick up mirrors.

I have downloaded https://huggingface.co/mlc-ai/Qwen-7B-Chat-q4f16_1-MLC.git to a local directory. What do I need to do next? Change the configuration file, or run a command?

tqchen commented 4 months ago

You can change the `model` field that originally contains `HF://` so that it points to your local path.
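
Concretely, the change to mlc-package-config.json might look like the sketch below, where dist/models/Qwen-7B-Chat-q4f16_1-MLC is a hypothetical placeholder for wherever the repo was cloned:

```json
{
    "device": "android",
    "model_list": [
        {
            "model": "dist/models/Qwen-7B-Chat-q4f16_1-MLC",
            "model_id": "Qwen-7B-Chat-q4f16_1-MLC",
            "estimated_vram_bytes": 3000000004
        }
    ]
}
```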

panghongtao commented 4 months ago

> You can change the `model` field that originally contains `HF://` so that it points to your local path.

OK, I've switched to using the local path. Thank you very much.

tqchen commented 4 months ago

Glad the local path works; closing this for now.

tqchen commented 4 months ago

@panghongtao if you have suggestions for the documentation (e.g., enhancing some docs to cover reading from a local path), please feel free to send a PR.

panghongtao commented 4 months ago

> @panghongtao if you have suggestions for the documentation (e.g., enhancing some docs to cover reading from a local path), please feel free to send a PR.

Well, I will send some suggestions for the documentation. Because the company's models aren't allowed to be uploaded to Hugging Face, I am using a locally converted model path. My colleague told me you once gave a talk at our company. Thank you very much for responding to my question.