vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: VLM Streaming does not output CompletionUsage #6705

Open epark001 opened 1 month ago

epark001 commented 1 month ago

Your current environment

PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35

Python version: 3.10.9 (main, Nov 14 2023, 16:04:51) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-229.el9.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 545.23.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   48 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          96
On-line CPU(s) list:             0-95
Vendor ID:                       AuthenticAMD
Model name:                      AMD EPYC 7413 24-Core Processor
CPU family:                      25
Model:                           1
Thread(s) per core:              2
Core(s) per socket:              24
Socket(s):                       2
Stepping:                        1
BogoMIPS:                        5289.93
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization:                  AMD-V
L1d cache:                       1.5 MiB (48 instances)
L1i cache:                       1.5 MiB (48 instances)
L2 cache:                        24 MiB (48 instances)
L3 cache:                        256 MiB (8 instances)
NUMA node(s):                    8
NUMA node0 CPU(s):               0-5,48-53
NUMA node1 CPU(s):               6-11,54-59
NUMA node2 CPU(s):               12-17,60-65
NUMA node3 CPU(s):               18-23,66-71
NUMA node4 CPU(s):               24-29,72-77
NUMA node5 CPU(s):               30-35,78-83
NUMA node6 CPU(s):               36-41,84-89
NUMA node7 CPU(s):               42-47,90-95
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] torch==2.3.1+cu118
[pip3] torchvision==0.18.1+cu118
[pip3] transformers==4.43.1
[pip3] triton==2.3.1
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      18-23,66-71     3               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

When streaming chat completions for a VLM (I am using phi-3-vision), no CompletionUsage is returned in the last chunk, but a normal non-streaming chat completion outputs the CompletionUsage token counts. Normal:

ChatCompletion(id='chat-93e005279a214cab81d944dd05497390', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="-----------------", role='assistant', function_call=None, tool_calls=[]), stop_reason=None)], created=1721769443, model='/llm/models/microsoft/phi-3-vision-128k-instruct', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=320, prompt_tokens=2536, total_tokens=2856))

Streaming:

ChatCompletionChunk(id='chat-564aab95833e4d22bb723367ff93567a', choices=[Choice(delta=ChoiceDelta(content='', function_call=None, role=None, tool_calls=None), finish_reason='stop', index=0, logprobs=None, stop_reason=None)], created=1721768915, model='/llm/models/microsoft/phi-3-vision-128k-instruct', object='chat.completion.chunk', system_fingerprint=None)

A normal LLM returns the CompletionUsage data in the last chunk:

ChatCompletionChunk(id='cmpl-6d7d3763c24c4c1c9e3b22d4d5181c0d', choices=[Choice(delta=ChoiceDelta(content='', function_call=None, role=None, tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=22906950, model='/llm/models/open-orca/mistral-7b-openorca-awq', object='chat.completion.chunk', system_fingerprint=None, usage={'prompt_tokens': 23, 'total_tokens': 118, 'completion_tokens': 95})
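
For reference, a rough sketch of the kind of streaming call I'm making against the VLM (server URL, prompt, and image URL are placeholders here):

from openai import OpenAI

# Placeholder endpoint; adjust to your own deployment.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

stream = client.chat.completions.create(
    model="/llm/models/microsoft/phi-3-vision-128k-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
    max_tokens=320,
    stream=True,
)

last_chunk = None
for chunk in stream:
    last_chunk = chunk

# On the phi-3-vision path this prints the chunk shown under "Streaming" above,
# with finish_reason='stop' but no usage attached.
print(last_chunk)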
DarkLight1337 commented 1 month ago

A normal LLM returns the CompletionUsage data in the last chunk:

Based on the request ID, it seems that you are using the Completions API rather than the Chat Completions API for the normal LLM. To better narrow down the issue, can you try using the Chat Completions API for the normal LLM as well?

tdoublep commented 1 month ago

@epark001 I think you need to include stream_options with include_usage=True when you send the request, since that is not enabled by default in vLLM.

Here is an example via the openai python client:

from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

chat_completion = client.chat.completions.create(
    messages=[{
        "role": "system",
        "content": "You are a helpful assistant."
    }, {
        "role": "user",
        "content": "Who won the world series in 2020?"
    }, {
        "role": "assistant",
        "content": "The Los Angeles Dodgers won the World Series in 2020."
    }, {
        "role": "user",
        "content": "Where was it played?"
    }],
    model=model,
    max_tokens=10,
    stream=True,
    stream_options={
        'include_usage': True,
    }
)

for c in chat_completion:
    print(c)

produces:

ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content='The', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' only', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' thing', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' I', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' don', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content="'t", function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' understand', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' is', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' why', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[Choice(delta=ChoiceDelta(content=' it', function_call=None, role=None, tool_calls=None), finish_reason='length', index=0, logprobs=None, stop_reason=None)], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chat-0a9c17690faf42f899bad7ecf0d61d30', choices=[], created=1721845881, model='facebook/opt-125m', object='chat.completion.chunk', system_fingerprint=None, usage=CompletionUsage(completion_tokens=10, prompt_tokens=34, total_tokens=44))

The usage is there as expected.
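
If you need to pull the usage out programmatically, something like this works on a fresh stream created the same way (with include_usage=True, the final chunk has an empty choices list and carries the usage):

usage = None
for chunk in chat_completion:
    if chunk.choices:
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="")
    # The trailing usage-only chunk has no choices; chunk.usage is set on it.
    if chunk.usage is not None:
        usage = chunk.usage

print()
print(usage)  # CompletionUsage(completion_tokens=10, prompt_tokens=34, total_tokens=44)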

RoyceMathews commented 3 weeks ago

One thing to note is that this does look like a regression in default behavior from vLLM v0.4.3 to v0.5.0.

In v0.4.3, the default behavior was to include the usage in the last chunk. In v0.5.0, the usage is only emitted, in an additional final chunk, when stream_options is set.

The relevant code is in vllm/entrypoints/openai/serving_completion.py#completion_stream_generator and serving_chat.py#chat_completion_stream_generator.

On the left is v0.5.0, on the right is v0.4.3 (see attached screenshot).

Would it make sense to reinstate this default behavior? Or add an environment variable that would allow it? This difference in default behavior is impacting me and my team.

DarkLight1337 commented 3 weeks ago

Would it make sense to reinstate this default behavior? Or add an environment variable that would allow it? This difference in default behavior is impacting me and my team.

Does setting include_usage=True work for you? Btw, this behaviour is specified by OpenAI, so we shouldn't deviate from it.

RoyceMathews commented 3 weeks ago

Setting it in the request body or when using the OpenAI Python SDK does work.
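
For reference, when hitting the server directly the option just goes in the JSON body of the request; a rough sketch with requests (URL and model are placeholders):

import json
import requests

# vLLM's OpenAI-compatible server streams SSE lines prefixed with "data: ".
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "facebook/opt-125m",
        "messages": [{"role": "user", "content": "Where was it played?"}],
        "stream": True,
        "stream_options": {"include_usage": True},
    },
    stream=True,
)

for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    if chunk.get("usage"):
        print(chunk["usage"])  # only the final chunk carries usage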

Totally understand wanting to follow the OpenAI spec.

Our team's issue is that we sit in between our customers using the SDK and the instances of vLLM we host on-prem. We were using the usage token count for rate-limiting purposes, and would rather not inject include_usage into our customers' requests.
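
For illustration, the alternative on our side would be approximating the counts ourselves with the model's tokenizer, which is rough at best (sketch only; it ignores the chat template and any image placeholder tokens):

from transformers import AutoTokenizer

# Load the served model's tokenizer from HF (phi-3-vision ships custom code,
# hence trust_remote_code).
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-vision-128k-instruct", trust_remote_code=True)

prompt_text = "Where was it played?"
completion_text = "It was played in Arlington, Texas."

# Approximate counts only; the server's prompt_tokens also includes the
# chat template and, for VLMs, the image tokens.
prompt_tokens = len(tokenizer.encode(prompt_text))
completion_tokens = len(tokenizer.encode(completion_text, add_special_tokens=False))
print(prompt_tokens, completion_tokens, prompt_tokens + completion_tokens)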