Closed: vlbosch closed this issue 1 month ago.
Apologies for the inconvenience. The latest nightly packages have fixed the issue. You may need to set the environment variable `MLC_JIT_POLICY=REDO` (e.g. `MLC_JIT_POLICY=REDO python -m mlc_llm chat ...`) to force automatic model recompilation after the upgrade.
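For instance (a sketch reusing the model directory from this issue; adjust the path to your own setup):

```sh
# Force the JIT to rebuild the cached model lib after upgrading the nightly packages
MLC_JIT_POLICY=REDO mlc_llm chat /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC

# The same applies when serving
MLC_JIT_POLICY=REDO mlc_llm serve /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC --port 9999
```

After the one-time recompilation, the policy can be left at its default (`ON`), which will pick up the freshly cached lib.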
Thanks for the quick solution! I can confirm that the chat and serve commands do work with the latest nightly.
🐛 Bug
After converting Mistral-Large-2407, loading the model for chat or serve fails with the error below:
"(mlc-llm) USER@MBPM3MVLB ~ % mlc_llm serve /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC --port 9999 [2024-09-05 13:46:03] INFO auto_device.py:88: Not found device: cuda:0 [2024-09-05 13:46:04] INFO auto_device.py:88: Not found device: rocm:0 [2024-09-05 13:46:05] INFO auto_device.py:79: Found device: metal:0 [2024-09-05 13:46:05] INFO auto_device.py:88: Not found device: vulkan:0 [2024-09-05 13:46:06] INFO auto_device.py:88: Not found device: opencl:0 [2024-09-05 13:46:06] INFO auto_device.py:35: Using device: metal:0 [2024-09-05 13:46:06] INFO jit.py:43: MLC_JIT_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY [2024-09-05 13:46:06] INFO jit.py:158: Using cached model lib: /Users/USER/.cache/mlc_llm/model_lib/3826dfed383847636248c8e5e540102b.dylib [2024-09-05 13:46:06] INFO engine_base.py:180: The selected engine mode is local. We choose small max batch size and KV cache capacity to use less GPU memory. [2024-09-05 13:46:06] INFO engine_base.py:205: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive". [2024-09-05 13:46:06] INFO engine_base.py:210: If you have high concurrent requests and want to maximize the GPU memory utilization, please select mode "server". [13:46:06] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/mlc-llm/cpp/serve/config.cc:687: Under mode "local", max batch size will be set to 4, max KV cache token capacity will be set to 8192, prefill chunk size will be set to 2048. [13:46:06] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/mlc-llm/cpp/serve/config.cc:687: Under mode "interactive", max batch size will be set to 1, max KV cache token capacity will be set to 32768, prefill chunk size will be set to 2048. [13:46:06] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/mlc-llm/cpp/serve/config.cc:687: Under mode "server", max batch size will be set to 80, max KV cache token capacity will be set to 32768, prefill chunk size will be set to 2048. [13:46:06] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/mlc-llm/cpp/serve/config.cc:768: The actual engine mode is "local". So max batch size is 4, max KV cache token capacity is 8192, prefill chunk size is 2048. [13:46:06] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/mlc-llm/cpp/serve/config.cc:773: Estimated total single GPU memory usage: 70063.542 MB (Parameters: 65776.148 MB. KVCache: 2969.526 MB. Temporary buffer: 1317.867 MB). The actual usage might be slightly larger than the estimated number. 
Exception in thread Thread-1: Traceback (most recent call last): File "/opt/homebrew/Caskroom/miniconda/base/envs/mlc-llm/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/opt/homebrew/Caskroom/miniconda/base/envs/mlc-llm/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "tvm/_ffi/_cython/./packed_func.pxi", line 339, in tvm._ffi._cy3.core.PackedFuncBase.call File "tvm/_ffi/_cython/./packed_func.pxi", line 270, in tvm._ffi._cy3.core.FuncCall File "tvm/_ffi/_cython/./packed_func.pxi", line 259, in tvm._ffi._cy3.core.FuncCall3 File "tvm/_ffi/_cython/./base.pxi", line 185, in tvm._ffi._cy3.core.CHECK_CALL File "/opt/homebrew/Caskroom/miniconda/base/envs/mlc-llm/lib/python3.12/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error raise py_err tvm.error.InternalError: Traceback (most recent call last): File "/Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/tvm/include/tvm/runtime/packed_func.h", line 649 InternalError: Check failed: typecode == kTVMPackedFuncHandle (0 vs. 10) : expected FunctionHandle but got int"
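As the log itself suggests, single-user interactive use fits better in mode "interactive" (batch size 1, larger KV cache); in recent `mlc_llm` builds this is selected with the `--mode` flag. A sketch, assuming a current nightly:

```sh
# Serve for a single interactive user instead of the default "local" mode
mlc_llm serve /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC --port 9999 --mode interactive
```

(This is unrelated to the crash itself, just the memory-tuning hint from the log.)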
To Reproduce
Steps to reproduce the behavior:
1. Convert the weights and generate the config with the `mlc_llm convert_weight` / `mlc_llm gen_config` commands listed under "Additional context" below.
2. Run `mlc_llm serve /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC --port 9999` (or `mlc_llm chat` on the same directory).
Expected behavior
The model loads and can be served.
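Once serving works, a minimal smoke test against the OpenAI-compatible REST endpoint that `mlc_llm serve` exposes might look like the following (the model id here is an assumption; it must match the name the server reports for the served model):

```sh
# Assumes the server from above is listening on port 9999;
# the "model" value is hypothetical and must match the served model id.
curl -s http://127.0.0.1:9999/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Mistral-Large-Instruct-2407-MLC",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
```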
Environment
- Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA): Metal (MacBook Pro)
- Operating system (e.g. Ubuntu/Windows/MacOS/...): macOS 15 developer preview (build 24A5331b)
- Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...): MacBook Pro (M3 Max)
- How you installed MLC-LLM (conda, source): conda
- How you installed TVM-Unity (pip, source): pip
- Python version (e.g. 3.10): 3.12
- GPU driver version (if applicable): Metal
- CUDA/cuDNN version (if applicable): -
- TVM Unity Hash Tag (`python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"`, applicable if you compile models):

```
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 95a3def27b04a203db1918e691dddd394d322978
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-09-03 23:06:10 -0700
USE_HIPBLAS: OFF
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 17.0.1
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER:
USE_CUBLAS: OFF
USE_METAL: ON
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_NVSHMEM: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
```

- Any other relevant information: (none)
Additional context
Converted the model myself with:

```sh
mlc_llm convert_weight /Users/USER/LLM/Mistral-Large-Instruct-2407 --quantization q4f16_1 --output /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC
```

Followed by:

```sh
mlc_llm gen_config /Users/USER/LLM/Mistral-Large-Instruct-2407 --quantization q4f16_1 --output /Users/USER/LLM/Mistral-Large-Instruct-2407-MLC --conv-template mistral_default
```