vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Qwen-14B-Chat-Int4 with guided_json error #3778

Open xunfeng1980 opened 7 months ago

xunfeng1980 commented 7 months ago

### Your current environment

```
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             32
On-line CPU(s) list:                0-31
Vendor ID:                          AuthenticAMD
Model name:                         AMD Ryzen 9 7950X 16-Core Processor
CPU family:                         25
Model:                              97
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          1
Stepping:                           2
CPU max MHz:                        5881.0000
CPU min MHz:                        400.0000
BogoMIPS:                           8982.54
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                     AMD-V
L1d cache:                          512 KiB (16 instances)
L1i cache:                          512 KiB (16 instances)
L2 cache:                           16 MiB (16 instances)
L3 cache:                           64 MiB (2 instances)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.1.2
[pip3] triton==2.1.0
[conda] Could not collect
ROCm Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  0-31    0       N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```

### 🐛 Describe the bug

## Run

```bash
docker run -it --rm -p 8000:8000 --gpus='"device=0"' --name qwen-int4-vllm -v /data/Qwen-14B-Chat-Int4:/model/Qwen-14B-Chat-Int4 --entrypoint=python3 vllm:0.4.0 -m vllm.entrypoints.openai.api_server --model /model/Qwen-14B-Chat-Int4 --trust-remote-code --quantization gptq
```
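Before running the test below, a quick sanity check that the server came up; the base URL and placeholder API key here simply match what the run command above exposes:

```python
# Quick check that the OpenAI-compatible endpoint is serving the model;
# base_url and the dummy api_key match the docker command above.
import openai

client = openai.OpenAI(base_url="http://127.0.0.1:8000/v1",
                       api_key="token-abc123")
print([m.id for m in client.models.list()])
# expected: ['/model/Qwen-14B-Chat-Int4']
```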


## Test

```python
import json

import jsonschema
import openai

client = openai.OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="token-abc123",
)

MODEL_NAME = "/model/Qwen-14B-Chat-Int4"
TEST_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "skills": {
            "type": "array",
            "items": {"type": "string", "maxLength": 10},
            "minItems": 3,
        },
        "work history": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "company": {"type": "string"},
                    "duration": {"type": "string"},
                    "position": {"type": "string"},
                },
                "required": ["company", "position"],
            },
        },
    },
    "required": ["name", "age", "skills", "work history"],
}

completion = client.completions.create(
    model=MODEL_NAME,
    prompt=f"Give an example JSON for an employee profile "
           f"that fits this schema: {TEST_SCHEMA}",
    n=3,
    temperature=1.0,
    max_tokens=500,
    extra_body=dict(guided_json=TEST_SCHEMA))

assert completion.id is not None
assert completion.choices is not None and len(completion.choices) == 3
for i in range(3):
    assert completion.choices[i].text is not None
    print(completion.choices[i].text)
    output_json = json.loads(completion.choices[i].text)
    jsonschema.validate(instance=output_json, schema=TEST_SCHEMA)
```

## Error

File "/workspace/vllm/entrypoints/openai/serving_completion.py", line 128, in create_completion await get_guided_decoding_logits_processor( File "/workspace/vllm/model_executor/guided_decoding.py", line 76, in get_guided_decoding_logits_processor result = await loop.run_in_executor(global_thread_pool, File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, *self.kwargs) File "/workspace/vllm/model_executor/guided_decoding.py", line 123, in _get_cached_logits_processor return JSONLogitsProcessor(guide, tokenizer) File "/workspace/vllm/model_executor/guided_logits_processors.py", line 154, in init super().init(regex_string, tokenizer) File "/workspace/vllm/model_executor/guided_logits_processors.py", line 117, in init fsm = RegexFSM(regex_string, tokenizer) File "/usr/local/lib/python3.10/dist-packages/outlines/fsm/fsm.py", line 121, in init self.states_to_token_maps, self.empty_token_ids = create_states_mapping( File "/usr/local/lib/python3.10/dist-packages/outlines/caching.py", line 74, in wrapper result = cached_function(args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/outlines/fsm/fsm.py", line 104, in create_states_mapping states_to_token_maps, empty_token_ids = create_fsm_index_tokenizer( File "/usr/local/lib/python3.10/dist-packages/outlines/fsm/regex.py", line 571, in create_fsm_index_tokenizer vocabulary, empty_token_ids = reduced_vocabulary(tokenizer) File "/usr/local/lib/python3.10/dist-packages/outlines/fsm/regex.py", line 545, in reduced_vocabulary token_str = tokenizer.convert_token_to_string(token) File "/workspace/vllm/model_executor/guided_logits_processors.py", line 53, in convert_token_to_string if token.startswith(SPIECE_UNDERLINE) or token == "<0x20>": TypeError: startswith first arg must be bytes or a tuple of bytes, not str

john-adeojo commented 3 months ago

Anybody solved this issue yet?

Dong148 commented 3 months ago

Anybody solved this issue yet?

jasonkylelol commented 3 months ago

Anybody solved this issue yet?👀

sjzhou4 commented 2 months ago

Anybody solved this issue yet?👀