🐛 Bug
I have dual RTX 3090s. I compiled the model with:

mlc_llm compile Llama-3-70B-Instruct-q4f16_1-MLC/mlc-chat-config.json --device cuda --overrides "tensor_parallel_shards=2" -o Llama-3-70B-Instruct-q4f16_1-cuda.so
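Since --overrides was only passed at compile time, one thing worth checking (my assumption, not a documented step) is whether tensor_parallel_shards is also set in the config that the runtime reads when loading the sharded parameters:

grep tensor_parallel_shards Llama-3-70B-Instruct-q4f16_1-MLC/mlc-chat-config.json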
Then I served it with:

mlc_llm serve Llama-3-70B-Instruct-q4f16_1-MLC --model-lib Llama-3-70B-Instruct-q4f16_1-cuda.so --host 0.0.0.0
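One variant I have not tried yet but that may be relevant (assuming mlc_llm serve accepts tensor_parallel_shards in its --overrides the same way mlc_llm compile does, which I have not verified):

mlc_llm serve Llama-3-70B-Instruct-q4f16_1-MLC --model-lib Llama-3-70B-Instruct-q4f16_1-cuda.so --overrides "tensor_parallel_shards=2" --host 0.0.0.0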
Serving ends with this error:

ValueError: Error when loading parameters from params_shard_299.bin: [08:29:09] /workspace/tvm/src/runtime/cuda/cuda_device_api.cc:145: InternalError: Check failed: (e == cudaSuccess || e == cudaErrorCudartUnloading) is false: CUDA: out of memory
Watching nvidia-smi shows that memory fills up on the first card while the second stays unused; once the first card is full, the process dies with the error above. I can run smaller models, but I can't get it to use both GPUs.
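Both devices are detected at startup (see the log below). For completeness, a quick way to check whether the TVM runtime in this environment sees both CUDA devices (a minimal sketch, assuming the tvm module bundled with the mlc-ai-nightly-cu122 wheel, whose Device objects expose an .exist flag):

python -c "import tvm; print([tvm.cuda(i).exist for i in range(2)])"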
What am I missing? How do I get it to use both GPUs?
To Reproduce
Steps to reproduce the behavior: run the compile and serve commands above. Full output from mlc_llm serve:
[2024-06-03 08:28:24] INFO auto_device.py:79: Found device: cuda:0
[2024-06-03 08:28:24] INFO auto_device.py:79: Found device: cuda:1
[2024-06-03 08:28:25] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-03 08:28:26] INFO auto_device.py:88: Not found device: metal:0
[2024-06-03 08:28:27] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-03 08:28:27] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-03 08:28:28] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-03 08:28:28] INFO auto_device.py:35: Using device: cuda:0
[2024-06-03 08:28:28] INFO engine_base.py:141: Using library model: Llama-3-70B-Instruct-q4f16_1-cuda.so
[08:28:29] /workspace/mlc-llm/cpp/serve/config.cc:646: Under mode "local", max batch size will be set to 4, max KV cache token capacity will be set to 2000, prefill chunk size will be set to 2000.
[08:28:29] /workspace/mlc-llm/cpp/serve/config.cc:646: Under mode "interactive", max batch size will be set to 1, max KV cache token capacity will be set to 2038, prefill chunk size will be set to 2038.
[08:28:29] /workspace/mlc-llm/cpp/serve/config.cc:646: Under mode "server", max batch size will be set to 80, max KV cache token capacity will be set to 1037, prefill chunk size will be set to 2048.
[08:28:29] /workspace/mlc-llm/cpp/serve/config.cc:726: The actual engine mode is "interactive". So max batch size is 1, max KV cache token capacity is 2038, prefill chunk size is 2038.
[08:28:29] /workspace/mlc-llm/cpp/serve/config.cc:731: Estimated total single GPU memory usage: 20614.709 MB (Parameters: 19489.766 MB. KVCache: 402.956 MB. Temporary buffer: 721.988 MB). The actual usage might be slightly larger than the estimated number.
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/test/miniconda3/envs/mlc2/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/home/test/miniconda3/envs/mlc2/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/home/test/miniconda3/envs/mlc2/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/workspace/mlc-llm/cpp/serve/threaded_engine.cc", line 156, in mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
  File "/workspace/mlc-llm/cpp/serve/threaded_engine.cc", line 269, in mlc::llm::serve::ThreadedEngineImpl::EngineReloadImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
  File "/workspace/mlc-llm/cpp/serve/engine.cc", line 800, in mlc::llm::serve::Engine::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
  File "/workspace/mlc-llm/cpp/serve/engine.cc", line 341, in mlc::llm::serve::EngineImpl::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
  File "/workspace/mlc-llm/cpp/serve/model.cc", line 666, in mlc::llm::serve::ModelImpl::LoadParams()
  File "/workspace/mlc-llm/cpp/serve/function_table.cc", line 176, in mlc::llm::serve::FunctionTable::LoadParams(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice)
ValueError: Traceback (most recent call last):
  8: mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:156
  7: mlc::llm::serve::ThreadedEngineImpl::EngineReloadImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:269
  6: mlc::llm::serve::Engine::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
        at /workspace/mlc-llm/cpp/serve/engine.cc:800
  5: mlc::llm::serve::EngineImpl::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
        at /workspace/mlc-llm/cpp/serve/engine.cc:341
  4: mlc::llm::serve::ModelImpl::LoadParams()
        at /workspace/mlc-llm/cpp/serve/model.cc:666
  3: mlc::llm::serve::FunctionTable::LoadParams(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice)
        at /workspace/mlc-llm/cpp/serve/function_table.cc:176
  2: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)>::AssignTypedLambda<void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)>(void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  1: tvm::runtime::relax_vm::NDArrayCache::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)
  0: _ZN3tvm7runtime6deta
  13: mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:156
  12: mlc::llm::serve::ThreadedEngineImpl::EngineReloadImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:269
  11: mlc::llm::serve::Engine::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
        at /workspace/mlc-llm/cpp/serve/engine.cc:800
  10: mlc::llm::serve::EngineImpl::Create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, tvm::runtime::TypedPackedFunc<void (tvm::runtime::Array<mlc::llm::serve::RequestStreamOutput, void>)>, tvm::runtime::Optional<mlc::llm::serve::EventTraceRecorder>)
        at /workspace/mlc-llm/cpp/serve/engine.cc:341
  9: mlc::llm::serve::ModelImpl::LoadParams()
        at /workspace/mlc-llm/cpp/serve/model.cc:666
  8: mlc::llm::serve::FunctionTable::LoadParams(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice)
        at /workspace/mlc-llm/cpp/serve/function_table.cc:176
  7: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)>::AssignTypedLambda<void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)>(void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  6: tvm::runtime::relax_vm::NDArrayCache::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)
  5: tvm::runtime::relax_vm::NDArrayCacheMetadata::FileRecord::Load(DLDevice, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, tvm::runtime::Optional<tvm::runtime::NDArray>*) const
  4: tvm::runtime::relax_vm::NDArrayCacheMetadata::FileRecord::ParamRecord::Load(DLDevice, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const*, tvm::runtime::Optional<tvm::runtime::NDArray>*) const
  3: tvm::runtime::NDArray::Empty(tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>)
  2: tvm::runtime::DeviceAPI::AllocDataSpace(DLDevice, int, long const*, DLDataType, tvm::runtime::Optional<tvm::runtime::String>)
  1: tvm::runtime::CUDADeviceAPI::AllocDataSpace(DLDevice, unsigned long, unsigned long, DLDataType)
  0: _ZN3tvm7runtime6deta
  File "/workspace/tvm/src/runtime/relax_vm/ndarray_cache_support.cc", line 255
ValueError: Error when loading parameters from params_shard_299.bin: [08:29:09] /workspace/tvm/src/runtime/cuda/cuda_device_api.cc:145: InternalError: Check failed: (e == cudaSuccess || e == cudaErrorCudartUnloading) is false: CUDA: out of memory
Environment
Platform: CUDA
Operating system: Pop!_OS 22.04
Device: 2 x RTX 3090
How you installed MLC-LLM:
conda create --name mlc2 python=3.11
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cu122 mlc-ai-nightly-cu122
Python version: 3.11
GPU driver version: 550.67
CUDA/cuDNN version (if applicable), output of nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
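Note that the system nvcc above is CUDA 11.5, while the installed wheels are cu122 builds. To read the CUDA version the TVM wheel itself was built with (the same information as in the libinfo dump below):

python -c "import tvm; print(tvm.support.libinfo().get('CUDA_VERSION'))"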
TVM Unity Hash Tag (applicable if you compile models), output of:
python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"

USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: 12.2
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: ON
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: e4c51591aad62acf678a77c261cd23aa73a6cc8c
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-05-31 11:22:33 -0400
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER: ON
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON