vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

ValueError: The model's max seq len (4096) is larger than the maximum number of tokens that can be stored in KV cache (3664). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine. #2418

Open handsomelys opened 10 months ago

handsomelys commented 10 months ago

I followed the Quickstart tutorial and deployed the Chinese-llama-alpaca-2 model using vLLM, and I got the following error:

***@***:~/Code/experiment/***/ToG$ CUDA_VISIBLE_DEVICES=0 python load_llm.py
INFO 01-11 15:51:02 llm_engine.py:70] Initializing an LLM engine with config: model='/home/***/***/models/alpaca-2', tokenizer='/home/***/***/models/alpaca-2', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=auto, tensor_parallel_size=1, quantization=None, enforce_eager=False, seed=0)
INFO 01-11 15:51:18 llm_engine.py:275] # GPU blocks: 229, # CPU blocks: 512
Traceback (most recent call last):
  File "load_llm.py", line 8, in <module>
    llm = LLM(model='/home/***/***/models/alpaca-2')
  File "/home/***/anaconda3/envs/lys-llm-env/lib/python3.8/site-packages/vllm/entrypoints/llm.py", line 105, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/home/***/anaconda3/envs/lys-llm-env/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 309, in from_engine_args
    engine = cls(*engine_configs,
  File "/home/***/anaconda3/envs/lys-llm-env/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 114, in __init__
    self._init_cache()
  File "/home/***/anaconda3/envs/lys-llm-env/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 284, in _init_cache
    raise ValueError(
ValueError: The model's max seq len (4096) is larger than the maximum number of tokens that can be stored in KV cache (3664). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

my code is:

from vllm import LLM, SamplingParams

prompts = [
    "hello, who is you?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model='/home/b3432/***/models/alpaca-2')
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Genrate text: {generated_text!r}")

What's going on, and what do I need to do to fix the error? I'm running the code on a single RTX 3090 (24 GB). Looking forward to a reply!

chopin1998 commented 10 months ago

Same error.

Set gpu_memory_utilization=0.75 and a lower max_model_len,

but the responses are too short...
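
For anyone trying the same workaround in Python, here is a minimal sketch (the model path is a placeholder, and the gpu_memory_utilization / max_model_len values are just examples; both are standard keyword arguments of vllm.LLM, so pick values that fit your own GPU):

from vllm import LLM, SamplingParams

# Placeholder model path -- replace with your own checkpoint.
llm = LLM(
    model="/path/to/alpaca-2",
    gpu_memory_utilization=0.75,  # fraction of GPU memory vLLM is allowed to use
    max_model_len=2048,           # cap the context length so the KV cache fits
)

outputs = llm.generate(["hello, who are you?"], SamplingParams(temperature=0.8, top_p=0.95))
print(outputs[0].outputs[0].text)

Lowering max_model_len shrinks the KV cache requirement, while raising gpu_memory_utilization leaves more room for it; the trade-off is a shorter maximum context versus less headroom for anything else on the GPU.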

ishand0101 commented 10 months ago

Having the same issue running CodeLlama-13b-Instruct-hf with the LangChain integration for vLLM.

The model's max seq len (16384) is larger than the maximum number of tokens that can be stored in KV cache (11408). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine. (type=value_error)

byerose commented 10 months ago

same error.

gree2 commented 10 months ago

ValueError: The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (26064). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.

Mistral-7B-v0.1

aklakl commented 10 months ago

Same exception with ValueError: The model's max seq len (2048) is larger than the maximum number of tokens that can be stored in KV cache (176). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

byerose commented 10 months ago

Same exception with ValueError: The model's max seq len (2048) is larger than the maximum number of tokens that can be stored in KV cache (176). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Set max_model_len to less than the KV cache size. It works.

AI-General commented 10 months ago

I hard-coded max_model_len to a fixed value.

vllm/config.py, line 104:

    # self.max_model_len = _get_and_verify_max_len(self.hf_config,
    #                                               max_model_len)
    self.max_model_len = 4096

ZhangzihanGit commented 9 months ago

I have the same issue here

silvacarl2 commented 9 months ago

I am having this problem with this:

python -m vllm.entrypoints.openai.api_server --model abacusai/Smaug-72B-v0.1 --tensor-parallel-size 4 --trust-remote-code --gpu-memory-utilization 0.9 --host 0.0.0.0 --port 9002

but we get this:

ValueError: The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (8512). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.

Is there a workaround to launch this from the command line?

mhillebrand commented 9 months ago

I am having this problem with this:

python -m vllm.entrypoints.openai.api_server --model abacusai/Smaug-72B-v0.1 --tensor-parallel-size 4 --trust-remote-code --gpu-memory-utilization 0.9 --host 0.0.0.0 --port 9002

but we get this:

ValueError: The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (8512). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.

Is there a workaround to launch this from the command line?

Yes, it looks like you can add --max_model_len 4096 to your command.

https://github.com/vllm-project/vllm/blob/e433c115bce2bf27f7b1abdde7029566007d9eee/vllm/engine/arg_utils.py#L22

silvacarl2 commented 9 months ago

thx, will try that!

mhillebrand commented 9 months ago

Oops. You'll want to use hyphens, not underscores.

https://github.com/vllm-project/vllm/blob/e433c115bce2bf27f7b1abdde7029566007d9eee/vllm/engine/arg_utils.py#L143
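
Putting the two comments together, the corrected launch command would look roughly like this (same flags as before, just with the hyphenated --max-model-len added; adjust 4096 to whatever context length you actually need):

python -m vllm.entrypoints.openai.api_server \
    --model abacusai/Smaug-72B-v0.1 \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --gpu-memory-utilization 0.9 \
    --max-model-len 4096 \
    --host 0.0.0.0 --port 9002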

silvacarl2 commented 9 months ago

yup found that LOL!

ElinLiu0 commented 9 months ago

Same error, same fix. Weird... why is this value initialized so large by default?

Nuclear6 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

ElinLiu0 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

Nuclear6 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

The documentation states that the gemma-7b model is supported, along with many other large models. Is it because of the machine configuration? This is an RTX 4090 desktop computer.

ElinLiu0 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

The documentation states that the gemma-7b model is supported, along with many other large models. Is it because of the machine configuration? This is an RTX 4090 desktop computer.

No idea about that, mate. I'm currently using AliCloud Qwen1.5-7B-INT4; with model_length set to 1024, it works fine as expected.

Nuclear6 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

The documentation states that the gemma-7b model is supported, along with many other large models. Is it because of the machine configuration? This is an RTX 4090 desktop computer.

No idea about that, mate. I'm currently using AliCloud Qwen1.5-7B-INT4; with model_length set to 1024, it works fine as expected.

My guess is that the machine configuration is incorrect.

[screenshots]

ElinLiu0 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

The documentation states that the gemma-7b model is supported, along with many other large models. Is it because of the machine configuration? This is an RTX 4090 desktop computer.

No idea about that, mate. I'm currently using AliCloud Qwen1.5-7B-INT4; with model_length set to 1024, it works fine as expected.

My guess is that the machine configuration is incorrect. [screenshots]

What tool are you using now? It looks pretty cool.

ElinLiu0 commented 9 months ago

Is there a solution to this problem now? I still encounter this problem on gemma-7b.

Maybe try a lower model length; it should be fine. Just keep watching the logs to make sure there is still room for the KV cache on your machine while you host your local Gemma.

The documentation states that the gemma-7b model is supported, along with many other large models. Is it because of the machine configuration? This is an RTX 4090 desktop computer.

No idea about that, mate. I'm currently using AliCloud Qwen1.5-7B-INT4; with model_length set to 1024, it works fine as expected.

My guess is that the machine configuration is incorrect. [screenshots]

Sorry, I only just noticed that you were translating into Chinese, my apologies.

Nuclear6 commented 9 months ago

https://rahulschand.github.io/gpu_poor/

DsnTgr commented 9 months ago

Try changing gpu_memory_utilization to 0.95 or 1.0 for vLLM. Then it should run successfully.

Nuclear6 commented 8 months ago

gpu_memory_utilization

That does not work. Can you post the modified files and code?

DsnTgr commented 8 months ago

https://github.com/vllm-project/vllm/blob/24aecf421a4ad5989697010963074904fead9a1b/vllm/engine/arg_utils.py#L30 https://github.com/vllm-project/vllm/blob/24aecf421a4ad5989697010963074904fead9a1b/vllm/entrypoints/llm.py#L51

Code

from vllm import LLM, SamplingParams

llm = LLM(model="HuggingFaceH4/zephyr-7b-beta", gpu_memory_utilization=0.95)

...
DsnTgr commented 8 months ago

It works when I run this model with Hugging Face or vLLM on an RTX 4090. I also used google/gemma-7b with HF successfully.

SafeyahShemali commented 8 months ago

Hello,

I used to run the engine as follows: python -m vllm.entrypoints.openai.api_server --model="codellama/CodeLlama-13b-Instruct-hf" --tensor-parallel-size=2

With 2 NVIDIA L4 GPUs it now shows the same error: ValueError: The model's max seq len (16384) is larger than the maximum number of tokens that can be stored in KV cache (14528). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.

Why did this happen, and how can I return to the previous configuration? I already ran a set of experiments on the old configuration, and I must keep the setup the same.

DsnTgr commented 8 months ago

I see the code self.max_num_batched_tokens = max(max_model_len, 2048) from https://github.com/vllm-project/vllm/blob/e221910e77087743a50560e4ae69c3c2a12beb53/vllm/config.py#L486 and "model_max_length": 1000000000000000019884624838656, from https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/tokenizer_config.json

Maybe you changed the max_model_len like https://github.com/vllm-project/vllm/issues/322#issuecomment-1874997867, but I'm not sure.
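
As a side note, one way to see what default the engine will pick up is to inspect the model config before launching. A small sketch (uses Hugging Face transformers; max_position_embeddings is the field name used by Llama-style configs, other architectures may differ):

from transformers import AutoConfig, AutoTokenizer

model_id = "codellama/CodeLlama-13b-Instruct-hf"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# vLLM derives its default max_model_len from config fields like this one,
# whereas the tokenizer's model_max_length can be a huge sentinel value.
print("max_position_embeddings:", getattr(config, "max_position_embeddings", None))
print("tokenizer.model_max_length:", tokenizer.model_max_length)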

SafeyahShemali commented 8 months ago

I am unsure whether this would suit me, as I need to keep the engine settings the same for the whole experiment.

Could anyone clarify whether this trick changes the model's performance (the inference part)?

silvacarl2 commented 7 months ago

Is max_model_len=2048 arbitrary, or simply the maximum number of tokens I can expect to run inference over?

silvacarl2 commented 7 months ago

So is max_model_len best set to the maximum number of tokens I may need to run inference over?

Hzzhang-nlp commented 7 months ago

Same error.

Set gpu_memory_utilization=0.75 and a lower max_model_len,

but the responses are too short...

I set it to 0.8, and the problem was solved. It looks like this: [screenshot]

deeshantk commented 7 months ago

I am using the following code.

llm = VLLM(
    vllm_kwargs={"quantization": "awq"},
    max_model_len=30624,
    model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
    # gpu_memory_utilization=1.0,
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=512,
    speculative_max_model_len=30624,
    top_k=40,
    top_p=0.95,
    temperature=0.7,
    repetition_penalty=1.1,
)

Getting the issue:

lm = VLLM(
  File "/home/ubuntu/isolated_product_description/ipd/lib/python3.8/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/ubuntu/isolated_product_description/ipd/lib/python3.8/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for VLLM
__root__
  The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (32624). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine. (type=value_error)

I have defined max_model_len < my KV cache but it still gives the same issue.

I am open to making changes to the loading parameters. Can anyone tell me what can be done here?
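
One thing that might be worth checking (an assumption on my part, since the fields accepted by the LangChain wrapper vary by version): the error still reports the model's default of 32768, which suggests max_model_len never reached the engine. If the wrapper does not forward max_model_len as a top-level argument, passing it through vllm_kwargs, which is handed straight to vllm.LLM, may help. A sketch:

from langchain_community.llms import VLLM

# Sketch only: engine-level options are moved into vllm_kwargs so they are
# passed directly to the underlying vllm.LLM. Adjust values for your GPU.
llm = VLLM(
    model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
    trust_remote_code=True,
    max_new_tokens=512,
    top_k=40,
    top_p=0.95,
    temperature=0.7,
    vllm_kwargs={
        "quantization": "awq",
        "max_model_len": 30624,
        "gpu_memory_utilization": 0.95,
    },
)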

chintanckg commented 5 months ago

What is the "max_model_len" equivalent argument when initializing the LLM class of vLLM?

LLM(max_model_len=2048) doesn't seem to work; there must be some other argument!

steve2972 commented 4 months ago

Same exception with ValueError: The model's max seq len (2048) is larger than the maximum number of tokens that can be stored in KV cache (176). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Set max_model_len to less than the KV cache size. It works.

Is there a way to programmatically get the max number of tokens that can be stored in KV cache before running the model? Or do you have to run it at least once to get the value?

strongliu110 commented 4 months ago

How can I increase the KV cache?

steve2972 commented 4 months ago

Same exception with ValueError: The model's max seq len (2048) is larger than the maximum number of tokens that can be stored in KV cache (176). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Set max_model_len to less than the KV cache size. It works.

Is there a way to programmatically get the max number of tokens that can be stored in the KV cache before running the model? Or do you have to run it at least once to get the value?

Oh, never mind, I found the code: https://github.com/vllm-project/vllm/blob/5ed3505d827658fe4f71f30fecf93a66baabfe26/vllm/worker/worker.py#L179-L204 Apparently vLLM runs a profiling pass of the model in order to find the available KV cache.
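
If you only need a rough estimate rather than the exact number the profiler reports, the per-token KV cache cost can be computed from the model config. A back-of-envelope sketch (the KV cache budget in bytes is something you supply yourself, e.g. free GPU memory after weights; field names are those used by Llama-style configs, and the result ignores activation and fragmentation overhead, so treat it as an upper bound):

from transformers import AutoConfig

def estimate_kv_cache_tokens(model_id: str, kv_cache_bytes: float, dtype_bytes: int = 2) -> int:
    """Rough upper bound on how many tokens fit in a given KV cache budget."""
    cfg = AutoConfig.from_pretrained(model_id)
    num_kv_heads = getattr(cfg, "num_key_value_heads", cfg.num_attention_heads)
    head_dim = cfg.hidden_size // cfg.num_attention_heads
    # Two tensors (K and V) per layer, each holding num_kv_heads * head_dim values per token.
    bytes_per_token = 2 * cfg.num_hidden_layers * num_kv_heads * head_dim * dtype_bytes
    return int(kv_cache_bytes // bytes_per_token)

# Example: roughly 8 GiB left for the KV cache after weights, fp16 cache.
print(estimate_kv_cache_tokens("meta-llama/Llama-2-7b-hf", 8 * 1024**3))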

xiangru2020 commented 3 months ago

I hard-coded max_model_len to a fixed value.

vllm/config.py, line 104:

    # self.max_model_len = _get_and_verify_max_len(self.hf_config,
    #                                               max_model_len)
    self.max_model_len = 4096

Thanks, it works! Do you know how to set this via a parameter in your own script instead of changing the official source?

markVaykhansky commented 2 months ago

On A10G with a single GPU, tried running Llama-3.1-8b-Instruct with vLLM with the following configuration:

      "enable_chunked_prefill": false,
      "enable_prefix_caching": true,
      "kv_cache_dtype": "auto"

And got the following error: ValueError: The model's max seq len (131072) is larger than the maximum number of tokens that can be stored in KV cache (29136). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Then I changed to the following configuration:

      "enable_chunked_prefill": false,
      "enable_prefix_caching": true,
      "kv_cache_dtype": "auto",
      "max_model_len": 25600

And got this error:

ValueError: The model's max seq len (25600) is larger than the maximum number of tokens that can be stored in KV cache (14592). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Why did the KV cache size change?

youkaichao commented 2 months ago

@markVaykhansky Chunked prefill is enabled by default for Llama 3.1, but when you change max_model_len to 25600 it gets disabled, and the amount of memory reserved changes.
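
If that is the cause, one option (an assumption about your setup, since flag support depends on the vLLM version, and your first run suggests chunked prefill and prefix caching already coexist there) is to keep chunked prefill enabled explicitly while lowering max_model_len, so the memory reservation stays comparable between runs:

      "enable_chunked_prefill": true,
      "enable_prefix_caching": true,
      "kv_cache_dtype": "auto",
      "max_model_len": 25600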

OfficerChul commented 3 weeks ago

I am having this problem with this: python -m vllm.entrypoints.openai.api_server --model abacusai/Smaug-72B-v0.1 --tensor-parallel-size 4 --trust-remote-code --gpu-memory-utilization 0.9 --host 0.0.0.0 --port 9002 but we get this: ValueError: The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (8512). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine. Is there a workaround to launch this from the command line?

Yes, it looks like you can add --max_model_len 4096 to your command.

https://github.com/vllm-project/vllm/blob/e433c115bce2bf27f7b1abdde7029566007d9eee/vllm/engine/arg_utils.py#L22

This solved my problem. I had the same issue using an RTX 4090 (24 GB). I changed the None part to 4096, since this LLaMA model only supports 4096 tokens.