mit-han-lab / TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library
https://mit-han-lab.github.io/TinyChatEngine/
MIT License

Windows CUDA Make chat problem #92

Open M0rtale opened 9 months ago

M0rtale commented 9 months ago

I am trying to use this solution on Windows with CUDA (compute capability 8.6). I am running into an issue where the function LLaVAGenerate is not resolved during linking, as shown in the screenshot below.

[screenshot: linker error referencing LLaVAGenerate]
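For context, this is the usual shape of such a failure: the function is declared in a header and called, but the translation unit that defines it is never compiled into the link. A minimal sketch with hypothetical file names and a simplified signature (not the actual TinyChatEngine sources):

```cpp
// llava_api.h (hypothetical) -- declaration only
void LLaVAGenerate();

// main.cpp -- the call site compiles fine; the error appears at link time
#include "llava_api.h"
int main() {
    LLaVAGenerate();  // "unresolved external symbol" / "undefined reference"
                      // if llava_generate.cpp below is never built
    return 0;
}

// llava_generate.cpp -- if the build system skips this file for the
// CUDA target, the linker cannot resolve the call above
void LLaVAGenerate() {}
```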

Steps to replicate:

Environment:

I fixed a few issues with NUM_THREAD not being defined and tanhf not being declared, then built using the command `make chat -j`. The kind of patches I applied are sketched below.
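Roughly (a sketch of typical fixes, not the exact patch; the real default for NUM_THREAD and the exact files may differ):

```cpp
// Provide a fallback where NUM_THREAD is used but never defined
// (upstream may intend this to come from a -DNUM_THREAD=... flag).
#ifndef NUM_THREAD
#define NUM_THREAD 8  // assumed default, chosen arbitrarily here
#endif

// tanhf is declared in <cmath>/<math.h>; including it resolves the
// "tanhf not defined" error seen with the Windows toolchain.
#include <cmath>
```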

My guess is that the only definition of LLaVAGenerate lives in the non_cuda directory, and that it is being omitted from the CUDA compilation (see the sketch after the screenshot below). Note that compiling with the CPU flag works fine, and I can get output from the LLM.

[screenshot attached]
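If that guess is right, the failure mode would look something like the following. This is a hypothetical Makefile fragment, not the repository's actual one, just to illustrate how per-target source selection can drop a definition:

```makefile
# Hypothetical: when CUDA is enabled, only cuda/ sources are globbed,
# so anything defined solely under non_cuda/ (e.g. LLaVAGenerate)
# is never compiled, and the final link fails with an unresolved symbol.
ifdef CUDA_ENABLE
    SRC := $(wildcard src/*.cc) $(wildcard src/ops/cuda/*.cu)
else
    SRC := $(wildcard src/*.cc) $(wildcard src/ops/non_cuda/*.cc)
endif
```

Checking the produced object files for the symbol (e.g. `nm -C <obj> | grep LLaVAGenerate` on a GNU toolchain, or `dumpbin /SYMBOLS` with MSVC) would confirm whether the definition is being compiled at all.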