🐛 Bug
Running mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0 fails with:
Use MLC config: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/params/mlc-chat-config.json"
Use model weights: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/params/ndarray-cache.json"
Use model library: "/Users/peter/_Git/_GPT/mlc-llm/dist/vicuna-13b-1.1-q3f16_0/vicuna-13b-1.1-q3f16_0-metal.so"
You can use the following special commands:
/help print the special commands
/exit quit the cli
/stats print out the latest stats (token/sec)
/reset restart a fresh chat
/reload [local_id] reload model `local_id` from disk, or reload the current model if `local_id` is not specified
Loading model...
[22:01:12] /Users/catalyst/Workspace/mlc-chat-conda-build/tvm/src/runtime/metal/metal_device_api.mm:165: Intializing Metal device 0, name=Apple M2 Max
Loading finished
Running system prompts...
libc++abi: terminating due to uncaught exception of type tvm::runtime::InternalError: [22:01:27] /Users/catalyst/Workspace/mlc-chat-conda-build/tvm/src/runtime/metal/metal_device_api.mm:308: Error! Some problems on GPU happaned!
Stack trace:
[bt] (0) 1 libtvm_runtime.dylib 0x0000000102722db4 tvm::runtime::detail::LogFatal::Entry::Finalize() + 68
[bt] (1) 2 libtvm_runtime.dylib 0x0000000102722d70 tvm::runtime::detail::LogFatal::Entry::Finalize() + 0
[bt] (2) 3 libtvm_runtime.dylib 0x000000010271d684 __clang_call_terminate + 0
[bt] (3) 4 libtvm_runtime.dylib 0x000000010281e9ac tvm::runtime::metal::MetalWorkspace::StreamSync(DLDevice, void*) + 264
[bt] (4) 5 libtvm_runtime.dylib 0x000000010281de34 tvm::runtime::metal::MetalWorkspace::FreeDataSpace(DLDevice, void*) + 52
[bt] (5) 6 libtvm_runtime.dylib 0x000000010276ad50 tvm::runtime::NDArray::Internal::DefaultDeleter(tvm::runtime::Object*) + 100
[bt] (6) 7 libmlc_llm.dylib 0x0000000102e131a8 tvm::runtime::SimpleObjAllocator::ArrayHandler<tvm::runtime::ArrayNode, tvm::runtime::ObjectRef>::Deleter_(tvm::runtime::Object*) + 96
[bt] (7) 8 libtvm_runtime.dylib 0x00000001027247f0 tvm::runtime::TVMRetValue::Clear() + 172
[bt] (8) 9 libtvm_runtime.dylib 0x00000001027dd054 std::__1::unique_ptr<tvm::runtime::relax_vm::VMFrame, std::__1::default_delete<tvm::runtime::relax_vm::VMFrame>>::~unique_ptr() + 96
[1] 7751 abort mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0
To Reproduce
Steps to reproduce the behavior:
conda install -c mlc-ai -c conda-forge mlc-chat-nightly --force-reinstall
mlc_chat_cli --local-id vicuna-13b-1.1-q3f16_0
Expected behavior
I expect the model to work in the same way as vicuna-7b-1.1 (which runs ok).
Environment
Platform: Apple M2 Max, Metal
Operating system: macOS Ventura 13.4
Device: MacBook Pro 16-inch, M2 Max, 32 GB
How you installed MLC-LLM (conda, source): git clone --recursive https://github.com/mlc-ai/mlc-llm.git
How you installed TVM-Unity (pip, source): pip install -I mlc_ai_nightly -f https://mlc.ai/wheels
Python version: 3.11
GPU driver version (if applicable):
CUDA/cuDNN version (if applicable):
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
USE_GTEST: AUTO
SUMMARIZE: OFF
USE_IOS_RPC: OFF
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_LLVM: llvm-config --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: b67c0eaa74919de719fc6a6c2ae774c0cf403d20
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2023-06-03 00:14:52 -0700
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: ON
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_CUBLAS: OFF
USE_METAL: ON
USE_MICRO_STANDALONE_RUNTIME: ON
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: ON
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /Library/Developer/CommandLineTools/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
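Any other relevant information: for completeness, a minimal check (a sketch, assuming the pip-installed mlc_ai_nightly wheel above) confirming that this TVM build sees the Metal device and reports the same commit hash as in the dump:

```python
# Sanity check: the installed TVM-Unity wheel can see the Metal GPU and
# reports its build hash (values are from this machine, not a fix).
import tvm

dev = tvm.metal(0)
print("Metal device present:", dev.exist)  # True on the M2 Max
print("GIT_COMMIT_HASH:", tvm.support.libinfo()["GIT_COMMIT_HASH"])
```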
Additional context
vicuna-7b-1.1 runs just fine.