mit-han-lab / TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library
https://mit-han-lab.github.io/TinyChatEngine/
MIT License

Error while running make chat. #83

Open s-swathib opened 6 months ago

s-swathib commented 6 months ago

Hi @RaymondWang0, I'm trying to run this project on Windows (CPU), and the prerequisites have been met. The model used is LLaMA2_7B_chat_awq_int4 with --QM QM_x86; we get the same error with the CUDA model as well. `make chat -j` fails because CUDA is not available. I've attached screenshots of the error below. Please provide a solution for this error. Thanks, Swathi S

[screenshots: error_vm_make_1, error_vm_make]
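As a quick sanity check before running `make chat -j`, one can verify whether the CUDA compiler is actually on `PATH`. This is a generic shell sketch, not part of TinyChatEngine's build system; it only reports whether `nvcc` is reachable, which helps confirm that a CPU-only build path is the right choice on this machine:

```shell
# Hypothetical pre-build check (not part of TinyChatEngine's Makefile):
# report whether the CUDA toolchain is available before building.
if command -v nvcc >/dev/null 2>&1; then
  echo "CUDA toolchain found: $(nvcc --version | tail -n 1)"
else
  echo "nvcc not found; a CPU-only build is required on this machine"
fi
```

If `nvcc` is missing, the build must target the CPU backend (here, QM_x86) rather than the CUDA one.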

Littledragon-wxl commented 2 months ago

Hello, I have encountered the same problem as you. Did you ever resolve it?