mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] Inference for "vicuna-13b-1.1-q3f16_0" fails with "Some problems on GPU happaned!" on M2 Max 32GB #337

Closed: pgagarinov closed this issue 1 year ago

pgagarinov commented 1 year ago

🐛 Bug

Running mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0 fails with:

Use MLC config: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/params/mlc-chat-config.json"
Use model weights: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/params/ndarray-cache.json"
Use model library: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/vicuna-13b-1.1-q3f16_0-metal.so"
You can use the following special commands:
  /help               print the special commands
  /exit               quit the cli
  /stats              print out the latest stats (token/sec)
  /reset              restart a fresh chat
  /reload [local_id]  reload model `local_id` from disk, or reload the current model if `local_id` is not specified

Loading model...
[22:01:12] /Users/catalyst/Workspace/mlc-chat-conda-build/tvm/src/runtime/metal/metal_device_api.mm:165: Intializing Metal device 0, name=Apple M2 Max
Loading finished
Running system prompts...
libc++abi: terminating due to uncaught exception of type tvm::runtime::InternalError: [22:01:27] /Users/catalyst/Workspace/mlc-chat-conda-build/tvm/src/runtime/metal/metal_device_api.mm:308: Error! Some problems on GPU happaned!
Stack trace:
  [bt] (0) 1   libtvm_runtime.dylib                0x0000000102722db4 tvm::runtime::detail::LogFatal::Entry::Finalize() + 68
  [bt] (1) 2   libtvm_runtime.dylib                0x0000000102722d70 tvm::runtime::detail::LogFatal::Entry::Finalize() + 0
  [bt] (2) 3   libtvm_runtime.dylib                0x000000010271d684 __clang_call_terminate + 0
  [bt] (3) 4   libtvm_runtime.dylib                0x000000010281e9ac tvm::runtime::metal::MetalWorkspace::StreamSync(DLDevice, void*) + 264
  [bt] (4) 5   libtvm_runtime.dylib                0x000000010281de34 tvm::runtime::metal::MetalWorkspace::FreeDataSpace(DLDevice, void*) + 52
  [bt] (5) 6   libtvm_runtime.dylib                0x000000010276ad50 tvm::runtime::NDArray::Internal::DefaultDeleter(tvm::runtime::Object*) + 100
  [bt] (6) 7   libmlc_llm.dylib                    0x0000000102e131a8 tvm::runtime::SimpleObjAllocator::ArrayHandler<tvm::runtime::ArrayNode, tvm::runtime::ObjectRef>::Deleter_(tvm::runtime::Object*) + 96
  [bt] (7) 8   libtvm_runtime.dylib                0x00000001027247f0 tvm::runtime::TVMRetValue::Clear() + 172
  [bt] (8) 9   libtvm_runtime.dylib                0x00000001027dd054 std::__1::unique_ptr<tvm::runtime::relax_vm::VMFrame, std::__1::default_delete<tvm::runtime::relax_vm::VMFrame>>::~unique_ptr() + 96

[1]    7751 abort      mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0
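
From the stack trace, the error only surfaces at the next MetalWorkspace::StreamSync call, triggered while an NDArray is being freed, so the message itself says little about the actual GPU fault. One way to get a more specific error might be to re-run with Apple's Metal API validation layer enabled (just an idea; MTL_DEBUG_LAYER is Apple's standard switch for Metal validation, not something mlc_chat_cli documents, and I have not verified it changes the output here):

  # Enable Metal API validation; validation errors go to stderr
  # and usually name the offending Metal call.
  MTL_DEBUG_LAYER=1 mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0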

To Reproduce

Steps to reproduce the behavior:

  1. Follow the steps from https://github.com/mlc-ai/mlc-llm#hugging-face-url for https://huggingface.co/eachadea/vicuna-13b-1.1
  2. Install mlc_chat_cli using conda install -c mlc-ai -c conda-forge mlc-chat-nightly --force-reinstall
  3. Run mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0 (the full session is sketched below)
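
For reference, the whole reproduction as one shell session looks roughly like this (a sketch: the build.py flags follow the README section linked in step 1 and may differ across nightly versions):

  # Step 1: build the q3f16_0-quantized model from the Hugging Face weights,
  # producing artifacts under ./dist (per the README linked above).
  python3 build.py --hf-path eachadea/vicuna-13b-1.1 --quantization q3f16_0
  # Step 2: install the nightly CLI.
  conda install -c mlc-ai -c conda-forge mlc-chat-nightly --force-reinstall
  # Step 3: run the CLI against the freshly built artifacts.
  mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0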

Expected behavior

I expect the 13B model to load and run just like vicuna-7b-1.1, which works fine on the same machine.

Environment

Additional context

vicuna-7b-1.1 runs just fine.

junrushao commented 1 year ago

This is weird. I cannot reproduce the issue on my M2 Max. Does it work with the prebuilt Vicuna-7b?
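
A quick way to check would be something like the following, assuming the prebuilt 7B package is already under dist/ (the exact local-id is a guess and depends on which prebuilt release you have):

  # Run the prebuilt Vicuna-7B instead of the locally built 13B model.
  mlc_chat_cli --local-id vicuna-v1-7b-q3f16_0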