mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] ValueError: Cannot detect local CUDA GPU target! #778

Closed · gaonkar-dinesh closed 1 year ago

gaonkar-dinesh commented 1 year ago

πŸ› Bug

To Reproduce

Steps to reproduce the behavior:

1. python3 -m mlc_llm.build --model "path to custom lora llama model" --target cuda --quantization q4f16_1

Expected behavior

Environment

Additional context

>>> import tvm
>>> dev = tvm.cuda()
>>> dev
cuda(0)
>>> print(dev.exist)
False
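The False here means the TVM runtime in this environment cannot see a CUDA device, which usually points at a CPU-only TVM/mlc-ai wheel rather than at the GPU or driver. A minimal check, assuming a TVM build recent enough to expose tvm.support.libinfo() (which reports the wheel's compile-time flags):

python3 -c "import tvm; print(tvm.support.libinfo().get('USE_CUDA'))"
# Prints OFF (or None) for a CPU-only build; a CUDA-enabled build prints ON
# or the CUDA toolkit path it was compiled against.

If this reports OFF, the package itself has to be rebuilt or reinstalled with CUDA enabled; no driver-side change will fix it.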

gaonkar-dinesh commented 1 year ago

It worked; I compiled mlc-ai with CUDA 11.6.
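In case it helps anyone else, a rough sketch of the reinstall route (the cu116 suffix is an assumption from my setup; match it to what nvcc --version reports, and https://mlc.ai/wheels is the nightly wheel index the MLC docs point at):

# Swap the CPU-only wheel for a CUDA build matching the local toolkit.
# cu116 below is an assumption; use the suffix matching `nvcc --version`.
python3 -m pip uninstall -y mlc-ai-nightly
python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-ai-nightly-cu116

Building TVM Unity from source with set(USE_CUDA ON) in config.cmake works as well, but the prebuilt wheel is the quicker path.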

ys-2020 commented 1 year ago

Hi @gaonkar-dinesh, I ran into the same error. Could you share more details on how you solved it? Thank you!

gaonkar-dinesh commented 1 year ago

> Hi @gaonkar-dinesh, I ran into the same error. Could you share more details on how you solved it? Thank you!

@ys-2020 Have you compiled MLC with CUDA support?

gaonkar-dinesh commented 1 year ago

@ys-2020 For me, recompiling MLC with the appropriate CUDA version solved the issue.