mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug]: Error on "The block is 1-time referenced by other blocks, thus cannot accept new KV values." #2422

Closed (neubig closed this issue 6 months ago)

neubig commented 6 months ago

🐛 Bug

When serving a model through the REST API on an 8xA6000 machine, I get this error: "The block is 1-time referenced by other blocks, thus cannot accept new KV values."

I've added the relevant details below.

To Reproduce

Steps to reproduce the behavior on a machine with 8 A6000 GPUs:

$ git clone https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k
$ mlc_llm convert_weight Llama-3-70B-Instruct-Gradient-1048k/ --quantization q4f16_1 -o Llama-3-70B-Instruct-Gradient-1048k-q4f16_1-MLC
$ mlc_llm gen_config Llama-3-70B-Instruct-Gradient-1048k/ --quantization q4f16_1 --conv-template redpajama_chat -o Llama-3-70B-Instruct-Gradient-1048k-q4f16_1-MLC/
$ mlc_llm serve Llama-3-70B-Instruct-Gradient-1048k-q4f16_1-MLC

Then send it a request with a relatively long context; replace "..." below with text containing a reasonably large number of tokens:

import litellm

response = litellm.completion(
    model="openai/Llama-3-70B-Instruct-Gradient-1048k-q4f16_1-MLC",               # add `openai/` prefix to model so litellm knows to route to OpenAI
    api_key="sk-1234",                  # api key to your openai compatible endpoint
    api_base="http://127.0.0.1:8080/v1",     # set API Base of your Custom OpenAI Endpoint
    messages=[
                {
                    "role": "user",
                    "content": "...",
                }
    ],
)
print(response)
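To trigger the failure, "..." needs to be replaced with a prompt long enough to span multiple KV-cache pages. One hypothetical way to generate filler text of a controllable size (the exact length required was not specified in the report, and actual token counts depend on the tokenizer):

```python
# Generate a long filler prompt; each "word{i}" piece is typically a few
# tokens, so n_words=8000 yields a prompt on the order of tens of thousands
# of tokens under the Llama-3 tokenizer.
def make_long_prompt(n_words: int) -> str:
    body = " ".join(f"word{i}" for i in range(n_words))
    return f"Summarize the following text:\n{body}"

prompt = make_long_prompt(8000)
```

The resulting string can then be used as the "content" value in the messages list above.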

Here is the full stack trace.

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/gneubig/anaconda3/envs/mlc_llm/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/home/gneubig/anaconda3/envs/mlc_llm/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/home/gneubig/anaconda3/envs/mlc_llm/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/workspace/mlc-llm/cpp/serve/threaded_engine.cc", line 168, in mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
  File "/workspace/mlc-llm/cpp/serve/engine.cc", line 365, in mlc::llm::serve::EngineImpl::Step()
  File "/workspace/mlc-llm/cpp/serve/engine_actions/new_request_prefill.cc", line 116, in mlc::llm::serve::NewRequestPrefillActionObj::Step(mlc::llm::serve::EngineState)
  File "/workspace/mlc-llm/cpp/serve/model.cc", line 230, in mlc::llm::serve::ModelImpl::BatchPrefill(tvm::runtime::ObjectRef const&, std::vector<long, std::allocator<long> > const&, std::vector<int, std::allocator<int> > const&)
tvm._ffi.base.TVMError: Traceback (most recent call last):
  7: mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:168
  6: mlc::llm::serve::EngineImpl::Step()
        at /workspace/mlc-llm/cpp/serve/engine.cc:365
  5: mlc::llm::serve::NewRequestPrefillActionObj::Step(mlc::llm::serve::EngineState)
        at /workspace/mlc-llm/cpp/serve/engine_actions/new_request_prefill.cc:116
  4: mlc::llm::serve::ModelImpl::BatchPrefill(tvm::runtime::ObjectRef const&, std::vector<long, std::allocator<long> > const&, std::vector<int, std::allocator<int> > const&)
        at /workspace/mlc-llm/cpp/serve/model.cc:230
  3: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (tvm::runtime::relax_vm::KVState, tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&)>::AssignTypedLambda<tvm::runtime::Registry::set_body_method<tvm::runtime::relax_vm::KVState, tvm::runtime::relax_vm::KVStateObj, void, tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&, void>(void (tvm::runtime::relax_vm::KVStateObj::*)(tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&))::{lambda(tvm::runtime::relax_vm::KVState, tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&)#1}>(tvm::runtime::Registry::set_body_method<tvm::runtime::relax_vm::KVState, tvm::runtime::relax_vm::KVStateObj, void, tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&, void>(void (tvm::runtime::relax_vm::KVStateObj::*)(tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&))::{lambda(tvm::runtime::relax_vm::KVState, tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  2: tvm::runtime::relax_vm::PagedAttentionKVCacheObj::BeginForward(tvm::runtime::ShapeTuple const&, tvm::runtime::ShapeTuple const&)
  1: tvm::runtime::relax_vm::PagedAttentionKVCacheObj::ReserveAppendLengthInSeq(tvm::runtime::relax_vm::Sequence*, long)
  0: _ZN3tvm7runtime6deta
  File "/workspace/tvm/src/runtime/relax_vm/paged_kv_cache.cc", line 1448
TVMError: Check failed: block.external_ref_cnt == 0 (1 vs. 0) : The block is 1-time referenced by other blocks, thus cannot accept new KV values.
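For context on what the check at paged_kv_cache.cc:1448 guards, here is a toy sketch (illustrative only, not TVM's actual data structures): in a paged KV cache with prefix sharing, a block that other sequences still reference must not be appended to in place; the engine is expected to copy-on-write fork it first.

```python
# Toy model of paged-KV-cache block refcounting (illustrative only; the real
# logic lives in tvm/src/runtime/relax_vm/paged_kv_cache.cc).

class Block:
    def __init__(self):
        self.kv = []                # KV entries stored in this block
        self.external_ref_cnt = 0   # how many other blocks/sequences reference it

def append_kv(block, values):
    # Mirrors the failing check: a shared block cannot take new KV values in place.
    if block.external_ref_cnt != 0:
        raise RuntimeError(
            f"The block is {block.external_ref_cnt}-time referenced by other "
            "blocks, thus cannot accept new KV values."
        )
    block.kv.extend(values)

def fork(block):
    # Copy-on-write fork: the child gets a fresh block and pins the parent.
    block.external_ref_cnt += 1
    return Block()

parent = Block()
append_kv(parent, ["k0", "k1"])  # fine: no other references yet
child = fork(parent)             # parent is now shared
append_kv(child, ["k2"])         # fine: the child block is private
try:
    append_kv(parent, ["k2'"])   # reproduces the reported error
except RuntimeError as e:
    print(e)
```

In the reported crash the engine hits this invariant during prefill, suggesting the engine-side bookkeeping and the cache-side refcounts got out of sync rather than a problem with the request itself.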

Expected behavior

The server should prefill the long request and return a completion without crashing.

Environment

USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU: 
CUDA_VERSION: 12.2
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: ON
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM: 
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: ce58d63453ff83b930fa2be665647621b2eec4d2
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-05-15 01:49:20 -0400
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER: ON
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION: 
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
MasterJH5574 commented 6 months ago

Hi @neubig! This is caused by a recent update on the TVM side (the same error is reported in https://github.com/mlc-ai/mlc-llm/issues/2386). You should be able to resolve it by updating to the latest TVM nightly via python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-cu122.

neubig commented 6 months ago

OK, great, thanks! I haven't had a chance to test yet, but I trust that this has been fixed. I'll reopen if not.