vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Usage]: How to use beam search when request OpenAI Completions API #6057

Closed by nguyenhoanganh2002 5 days ago

nguyenhoanganh2002 commented 2 months ago

Your current environment

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB

Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      40 bits physical, 57 bits virtual
CPU(s):                             52
On-line CPU(s) list:                0-51
Thread(s) per core:                 1
Core(s) per socket:                 52
Socket(s):                          1
NUMA node(s):                       1
Vendor ID:                          GenuineIntel
CPU family:                         6
Model:                              134
Model name:                         Intel Xeon Processor (Icelake)
Stepping:                           0
CPU MHz:                            2194.848
BogoMIPS:                           4389.69
Virtualisation:                     VT-x
Hypervisor vendor:                  KVM
Virtualisation type:                full
L1d cache:                          1.6 MiB
L1i cache:                          1.6 MiB
L2 cache:                           208 MiB
L3 cache:                           16 MiB
NUMA node0 CPU(s):                  0-51
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed:             Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Vulnerable, KVM SW loop
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid md_clear arch_capabilities

Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] flake8-bugbear==24.4.26
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchtext==0.18.0
[pip3] torchvision==0.18.0
[pip3] transformers==4.40.2
[pip3] triton==2.3.0
[conda] numpy                     1.23.5                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] torchtext                 0.18.0                   pypi_0    pypi
[conda] torchvision               0.18.0                   pypi_0    pypi
[conda] transformers              4.40.2                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV12    0-51    0               N/A
GPU1    NV12     X      0-51    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

How would you like to use vllm

How to use beam search when request OpenAI Completions API

I tried:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="xxx",
)

# pr = """Generate a short love story (50-100 words)"""
pr = "Hello"

messages = []
messages.append({"role": "user", "content": pr})

completion = client.chat.completions.create(
  model="Qwen2-7B-Instruct",
  messages=messages,
  max_tokens=256,
  top_p=0.8,
  use_beam_search=True
)
messages.append({"role": "assistant", "content": completion.choices[0].message.content})
print(completion.choices[0].message.content)

Got error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[4], line 6
      2 pr = "Hello"
      4 messages.append({"role": "user", "content": pr})
----> 6 completion = client.chat.completions.create(
      7   model="Qwen2-7B-Instruct",
      8   messages=messages,
      9   max_tokens=256,
     10   top_p=0.8,
     11   use_beam_search=True
     12 )
     13 messages.append({"role": "assistant", "content": completion.choices[0].message.content})
     14 print(completion.choices[0].message.content)

File ~/miniconda3/envs/commonenv/lib/python3.10/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    275             msg = f"Missing required argument: {quote(missing[0])}"
    276     raise TypeError(msg)
--> 277 return func(*args, **kwargs)

TypeError: Completions.create() got an unexpected keyword argument 'use_beam_search'
nguyenhoanganh2002 commented 2 months ago

I deployed the LLM with docker-compose, using the following server flags:

--served-model-name ${LLM_MODEL_NAME} --model /root/.cache/huggingface/hub/qwen2-vien-ed --dtype bfloat16 --host 0.0.0.0 --port ${LLM_PORT} --api-key ${LLM_API_KEY} --max-model-len 4096 --gpu-memory-utilization 0.8
mhillebrand commented 2 weeks ago

You need to use extra_body when specifying extra parameters that vLLM sprinkled on top of the OpenAI API.

completion = client.chat.completions.create(
  model="Qwen2-7B-Instruct",
  messages=messages,
  max_tokens=256,
  top_p=0.8,
  extra_body={'use_beam_search': True}
)
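Building on that answer, here is a minimal sketch of the full request. The model name, endpoint, and beam width are placeholders from this thread, not requirements; the note about beam-search validation is my reading of vLLM 0.4.x's SamplingParams checks (beam search expects best_of > 1 and temperature == 0), not something confirmed in this thread:

```python
def beam_search_kwargs(model: str, messages: list, beam_width: int = 4) -> dict:
    """Build kwargs for client.chat.completions.create() with beam search."""
    return {
        "model": model,
        "messages": messages,
        "max_tokens": 256,
        # Parameters the OpenAI SDK does not recognize must ride inside
        # extra_body; the SDK merges them into the request JSON as-is,
        # which is why passing use_beam_search as a top-level keyword
        # raises the TypeError seen above.
        "extra_body": {
            "use_beam_search": True,
            "best_of": beam_width,   # number of beams; should exceed 1
            "temperature": 0.0,      # beam search is deterministic
        },
    }

kwargs = beam_search_kwargs(
    "Qwen2-7B-Instruct",
    [{"role": "user", "content": "Hello"}],
)
# completion = client.chat.completions.create(**kwargs)
```

Leaving sampling knobs like top_p at their defaults avoids the server rejecting the request for mixing sampling settings with beam search.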
nguyenhoanganh2002 commented 5 days ago

You need to use extra_body when specifying extra parameters that vLLM sprinkled on top of the OpenAI API.

completion = client.chat.completions.create(
  model="Qwen2-7B-Instruct",
  messages=messages,
  max_tokens=256,
  top_p=0.8,
  extra_body={'use_beam_search': True}
)

Thanks a lot.