Closed: gaonkar-dinesh closed this issue 1 year ago
It worked; I compiled mlc-ai with CUDA 11.6.
Hi @gaonkar-dinesh, I ran into the same error. Could you please explain in more detail how you solved the problem? Thank you!
@ys-2020 Have you compiled MLC with CUDA support?
@ys-2020 For me, recompiling MLC with the appropriate CUDA version solved the issue.
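For anyone debugging this, a quick way to confirm whether the installed TVM/MLC runtime was actually built with CUDA is to inspect its build flags from Python. This is only a sketch; the exact keys reported by tvm.support.libinfo() may vary between builds.

import tvm

# Build-time flags of the installed TVM runtime; if the wheel was compiled
# with CUDA support, USE_CUDA should not be "OFF" (key name assumed from
# common TVM builds).
info = tvm.support.libinfo()
print("USE_CUDA:", info.get("USE_CUDA"))

# Runtime check: is a CUDA device actually visible to TVM?
print("CUDA device present:", tvm.cuda(0).exist)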
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
1. python3 -m mlc_llm.build --model "path to custom lora llama model" --target cuda --quantization q4f16_1
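Before rebuilding, it can also help to confirm which CUDA toolkit is installed locally, since the fix reported in this thread was rebuilding MLC against the matching CUDA version (e.g. 11.6). A minimal sketch, assuming nvcc is on PATH:

import subprocess

# Print the CUDA toolkit version reported by nvcc (look for "release 11.x").
out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
print(out.stdout)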
Expected behavior
Environment
How you installed MLC-LLM: pip
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"):
Additional context