InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/memory_utils.cu:32 #2381

Closed AmazDeng closed 1 month ago

AmazDeng commented 2 months ago

Describe the bug

I followed the official InternVL2 documentation (https://internvl.readthedocs.io/en/latest/internvl2.0/deployment.html) and used lmdeploy to load the 40B model, but I ran into this error: `RuntimeError: [TM][ERROR] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/memory_utils.cu:32`. My machine is an A100 80G. What could be the issue? lmdeploy officially supports the InternVL2 model.

Reproduction

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = '/media/star/disk2/pretrained_model/InternVL2-40B'
image = load_image('/media/star/8T/tmp/2.jpg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
response = pipe(('describe this image', image))
print(response.text)
```

Environment

sys.platform: linux
Python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.91
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.18.1+cu121
LMDeploy: 0.5.3+
transformers: 4.44.2
gradio: Not Found
fastapi: 0.112.2
pydantic: 2.8.2
triton: 2.3.1
NVIDIA Topology: 
        GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      20-39,60-79     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

### Error traceback

```Shell
Error message:

(lmdeploy) star@star-SYS-7049GP-TRT:/media/star/8T/PycharmProjects/github/gpt/InternVL/jupyter$ python lmdeploy_test2.py 
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaModel is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Traceback (most recent call last):
  File "/media/star/8T/PycharmProjects/github/gpt/InternVL/jupyter/lmdeploy_test2.py", line 8, in <module>
    pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/api.py", line 89, in pipeline
    return pipeline_class(model_path,
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/serve/vl_async_engine.py", line 24, in __init__
    super().__init__(model_path, **kwargs)
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 190, in __init__
    self._build_turbomind(model_path=model_path,
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 235, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 340, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 144, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 251, in _from_hf
    self._create_weight(model_comm)
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 170, in _create_weight
    future.result()
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/star/miniconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 163, in _create_weight_func
    model_comm.create_shared_weights(device_id, rank)
RuntimeError: [TM][ERROR] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/memory_utils.cu:32
```

AmazDeng commented 2 months ago

@lvhan028 @AllentDan @grimoire @irexyc @RunningLeon @lzhangzz @zhyncs @zhulinJulia24 @tpoisonooo @pppppM @ispobock @wangruohui @Harold-lkk @HIT-cwh Could you please take a look at this issue?

irexyc commented 2 months ago

Without the KV cache, the 40B model needs about 78 GB of memory just to load the weights.

To load and run inference with the model, I think you should use at least two A100s, or use the AWQ-quantized model: https://huggingface.co/OpenGVLab/InternVL2-40B-AWQ
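
As a minimal sketch of those two options, assuming the `tp` and `model_format` fields of `TurbomindEngineConfig` and reusing the paths from this thread:

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

image = load_image('/media/star/8T/tmp/2.jpg')

# Option 1: shard the bf16 weights across two A100s with tensor parallelism.
pipe = pipeline(
    '/media/star/disk2/pretrained_model/InternVL2-40B',
    backend_config=TurbomindEngineConfig(session_len=8192, tp=2))

# Option 2: load the 4-bit AWQ checkpoint, which fits on a single A100 80G.
# pipe = pipeline(
#     'OpenGVLab/InternVL2-40B-AWQ',
#     backend_config=TurbomindEngineConfig(session_len=8192, model_format='awq'))

print(pipe(('describe this image', image)).text)
```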

AmazDeng commented 2 months ago

> Without the KV cache, the 40B model needs about 78 GB of memory just to load the weights.
>
> To load and run inference with the model, I think you should use at least two A100s, or use the AWQ-quantized model: https://huggingface.co/OpenGVLab/InternVL2-40B-AWQ

1. First, I was able to load the 40B model using PyTorch/transformers without any issues, and inference works fine as well. After inference, the VRAM usage increases to about 60 GB.

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = '/media/star/disk2/pretrained_model/InternVL2-40B'
# split_model is the device_map helper from the InternVL2 docs
device_map = split_model('InternVL2-40B')
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,          # quantize to 8-bit via bitsandbytes while loading
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```
2. Secondly, other models (not InternVL models), such as llava-next-video (7B), load correctly after compiling with TensorRT-LLM, and the VRAM usage of the 16-bit model does not exceed the VRAM usage when loaded with the PyTorch engine.

So, after deploying with lmdeploy, shouldn't the VRAM usage be smaller than when loading the same model with transformers?

3. Could the VRAM explosion be due to transformers being set to use int8 during loading, whereas lmdeploy did not specify int8?

irexyc commented 2 months ago

When you set load_in_8bit=True, transformers uses bitsandbytes to quantize the model, so it can be loaded with less GPU memory. Without load_in_8bit=True, AutoModel.from_pretrained takes up about 77 GB of memory.

In terms of quantized models, GPTQ/AWQ is better than bitsandbytes.
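
A rough back-of-the-envelope check of those figures (an illustrative estimate, not an lmdeploy measurement): weight memory is roughly the parameter count times bytes per parameter, before any vision tower, KV cache, or CUDA context overhead.

```python
# Rough weight-memory estimate for a ~40B-parameter model (illustrative only).
params = 40e9

for name, bytes_per_param in [('bf16/fp16', 2), ('int8', 1), ('4-bit AWQ/GPTQ', 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f'{name}: ~{gib:.0f} GiB for the weights alone')

# Roughly: bf16 ~75 GiB (little or no headroom on one 80 GB A100),
# int8 ~37 GiB (why the load_in_8bit=True transformers run fits),
# 4-bit ~19 GiB (why the InternVL2-40B-AWQ checkpoint is recommended).
```

This is consistent with the bf16 TurboMind engine running out of memory while allocating the shared weights on a single 80 GB card.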

github-actions[bot] commented 1 month ago

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

github-actions[bot] commented 1 month ago

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.