mit-han-lab / TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library
https://mit-han-lab.github.io/TinyChatEngine/
MIT License

LLaMA2_7B_chat_awq_int4.zip Empty File #64

Closed tuobulatuo closed 9 months ago

tuobulatuo commented 9 months ago

I only get 106KB of data when downloading the LLaMA2_7B_chat_awq_int4 model.

Output:

python tools/download_model.py --model LLaMA2_7B_chat_awq_int4 --QM QM_CUDA

Start downloading the model to ./LLaMA2_7B_chat_awq_int4.zip. 106KB [00:00, 1668.05KB/s]

File downloaded successfully: ./LLaMA2_7B_chat_awq_int4.zip
The md5sum of the file does not match the expected md5sum. Expected: d0b1d11e498ac7d0a2e90348e946a7f5, got: 92ec5c6c6fc64012b42d4822a6327fde
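For context, the failure above is an integrity check: the downloaded zip's md5 digest is compared against a known-good value, and a mismatch (here, because the hosted file was truncated to 106KB) aborts the process. A minimal sketch of that kind of check is below; the helper names `file_md5` and `verify_download` are illustrative, not the actual functions in `tools/download_model.py`.

```python
# Hypothetical sketch of a chunked md5 integrity check for a downloaded
# file, similar in spirit to the check that fails in the log above.
import hashlib


def file_md5(path, chunk_size=8192):
    """Return the hex md5 digest of a file, read in chunks to bound memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_download(path, expected_md5):
    """Raise if the file's md5 does not match the expected digest."""
    got = file_md5(path)
    if got != expected_md5:
        raise ValueError(
            f"The md5sum of the file does not match the expected md5sum. "
            f"Expected: {expected_md5}, got: {got}"
        )
```

A mismatch like the one reported usually means the upload on the hosting side is broken or incomplete, not that the download tool itself is at fault.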

Rkyzzy commented 9 months ago

Hi, I ran into the same issue. It seems the files for the QM_CUDA version of the LLaMA2_7B_chat_awq_int4 model are missing. Did you manage to solve the problem? Or could the author @meenchen check the availability of this model? Much thanks!

RaymondWang0 commented 9 months ago

Hi @tuobulatuo and @Rkyzzy, our apologies for the missing file. Something went wrong in our Dropbox. The problem should be fixed now, so I'll close this issue. Please try again, and feel free to reopen it if you have any further questions. Thanks!