InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] lmdeploy - ERROR - run out of tokens. session_id=1 #2319


cuong-dyania commented 2 months ago


Describe the bug

The bug arose when I ran lmdeploy with Llama 3 70B during inference, while generating a text completion for a very long note (around 8K tokens).

I tried passing a generation config such as gen_config = GenerationConfig(min_new_tokens=100), but it did not help.

Reproduction

from transformers import AutoTokenizer, AutoModelForCausalLM
from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

model_id = "meta-llama/Meta-Llama-3-70B"

backend_config = TurbomindEngineConfig(cache_max_entry_count=0.2, tp=4)
pipe = pipeline(model_id, backend_config=backend_config)

prompts = [' A very long prompt........']

response = pipe(prompts)

I tried to include a generation config, but the same error still showed up:

gen_config = GenerationConfig(min_new_tokens=100)
response = pipe(prompts, gen_config=gen_config)
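For reference, the sketch below consolidates the snippets above into one script and adds the two token-budget knobs that lmdeploy exposes, session_len on TurbomindEngineConfig and max_new_tokens on GenerationConfig. Treating these as the parameters relevant to a "run out of tokens" error is my assumption and is not confirmed anywhere in this thread; the specific values are placeholders only.

from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

model_id = "meta-llama/Meta-Llama-3-70B"

# Assumption: the ~8K-token prompt plus the completion may exceed the
# per-session token budget. session_len is raised here purely for
# illustration; extending it past the model's native context would also
# need RoPE scaling in practice.
backend_config = TurbomindEngineConfig(tp=4, session_len=16384)
pipe = pipeline(model_id, backend_config=backend_config)

# min_new_tokens only sets a lower bound on the completion length;
# max_new_tokens caps it.
gen_config = GenerationConfig(min_new_tokens=100, max_new_tokens=512)
response = pipe([' A very long prompt........'], gen_config=gen_config)
print(response)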

Environment

NA

Error traceback

No response

lvhan028 commented 2 months ago

Please kindly paste the information after running lmdeploy check_env

cuong-dyania commented 2 months ago

Thanks for your quick reply, lvhan028. Below is the information you need:

sys.platform: linux
Python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 12.2, V12.2.140
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:

TorchVision: 0.18.1+cu121
LMDeploy: 0.5.3+
transformers: 4.41.2
gradio: Not Found
fastapi: 0.112.0
pydantic: 2.8.2
triton: 2.3.1

NVIDIA Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity    NUMA Affinity  GPU NUMA ID
GPU0  X     NV12  NV12  NV12  NV12  NV12  NV12  NV12  0-63,128-191    0              N/A
GPU1  NV12  X     NV12  NV12  NV12  NV12  NV12  NV12  0-63,128-191    0              N/A
GPU2  NV12  NV12  X     NV12  NV12  NV12  NV12  NV12  0-63,128-191    0              N/A
GPU3  NV12  NV12  NV12  X     NV12  NV12  NV12  NV12  0-63,128-191    0              N/A
GPU4  NV12  NV12  NV12  NV12  X     NV12  NV12  NV12  64-127,192-255  1              N/A
GPU5  NV12  NV12  NV12  NV12  NV12  X     NV12  NV12  64-127,192-255  1              N/A
GPU6  NV12  NV12  NV12  NV12  NV12  NV12  X     NV12  64-127,192-255  1              N/A
GPU7  NV12  NV12  NV12  NV12  NV12  NV12  NV12  X     64-127,192-255  1              N/A

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

lvhan028 commented 2 months ago

I suggest using the default cache_max_entry_count. If OOM happens, decrease its value. For an explanation of cache_max_entry_count, please refer to the note at https://lmdeploy.readthedocs.io/en/latest/get_started.html
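Applied to the reproduction above, that suggestion looks roughly like the sketch below. This is illustrative only, not a verified fix; the 0.4 value mentioned in the comment is just one example of "decreasing" the ratio.

from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

model_id = "meta-llama/Meta-Llama-3-70B"

# Omit cache_max_entry_count so the engine uses its default KV-cache ratio;
# pass a smaller value (e.g. cache_max_entry_count=0.4) only if OOM occurs.
backend_config = TurbomindEngineConfig(tp=4)
pipe = pipeline(model_id, backend_config=backend_config)

gen_config = GenerationConfig(min_new_tokens=100)
response = pipe([' A very long prompt........'], gen_config=gen_config)
print(response)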