vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: No module named `jsonschema.protocols`. #6486

Open eff-kay opened 2 months ago

eff-kay commented 2 months ago

Your current environment

The output of `python collect_env.py`
Collecting environment information...
/home/ubuntu/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:118: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
WARNING 07-16 23:02:53 _custom_ops.py:14] Failed to import from vllm._C with ImportError("/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/ubuntu/.local/lib/python3.8/site-packages/vllm/_C.abi3.so)")
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31

Python version: 3.8.10 (default, Jun 22 2022, 20:18:18)  [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: 11.4.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              2
Core(s) per socket:              2
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           49
Model name:                      AMD EPYC 7R32
Stepping:                        0
CPU MHz:                         2799.836
BogoMIPS:                        5599.67
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       64 KiB
L1i cache:                       64 KiB
L2 cache:                        1 MiB
L3 cache:                        8 MiB
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid

Versions of relevant libraries:
[pip3] mypy==0.991
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.42.4
[pip3] triton==2.3.1
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity
GPU0     X  0-3     N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

No module named 'jsonschema.protocols'

I did `pip install vllm`, then ran the following command:

python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct

and ran into the following error:

WARNING 07-16 22:59:22 _custom_ops.py:14] Failed to import from vllm._C with ImportError("/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/ubuntu/.local/lib/python3.8/site-packages/vllm/_C.abi3.so)")
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/entrypoints/openai/api_server.py", line 33, in <module>
    from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/entrypoints/openai/serving_chat.py", line 28, in <module>
    from vllm.model_executor.guided_decoding import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/model_executor/guided_decoding/__init__.py", line 6, in <module>
    from vllm.model_executor.guided_decoding.lm_format_enforcer_decoding import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/model_executor/guided_decoding/lm_format_enforcer_decoding.py", line 15, in <module>
    from vllm.model_executor.guided_decoding.outlines_decoding import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/model_executor/guided_decoding/outlines_decoding.py", line 13, in <module>
    from vllm.model_executor.guided_decoding.outlines_logits_processors import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/vllm/model_executor/guided_decoding/outlines_logits_processors.py", line 24, in <module>
    from outlines.caching import cache
  File "/home/ubuntu/.local/lib/python3.8/site-packages/outlines/__init__.py", line 2, in <module>
    import outlines.generate
  File "/home/ubuntu/.local/lib/python3.8/site-packages/outlines/generate/__init__.py", line 6, in <module>
    from .json import json
  File "/home/ubuntu/.local/lib/python3.8/site-packages/outlines/generate/json.py", line 7, in <module>
    from outlines.fsm.json_schema import build_regex_from_schema, get_schema_from_signature
  File "/home/ubuntu/.local/lib/python3.8/site-packages/outlines/fsm/json_schema.py", line 7, in <module>
    from jsonschema.protocols import Validator
ModuleNotFoundError: No module named 'jsonschema.protocols'
seanchen commented 3 weeks ago

I got the same error and resolved it by upgrading jsonschema to the latest version. jsonschema introduced the protocols module in v4.19.0: https://github.com/python-jsonschema/jsonschema/releases/tag/v4.19.0
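A small pre-flight check along these lines (the helper names are illustrative, not part of vLLM or jsonschema, and the v4.19.0 minimum is taken from the comment above) can catch an incompatible install before starting the server:

```python
# Sketch of a pre-flight check for the jsonschema version. The helpers
# are illustrative, not part of vLLM or jsonschema; the 4.19.0 minimum
# is an assumption based on the release notes linked above.
import importlib.util


def version_is_at_least(installed: str, minimum: str = "4.19.0") -> bool:
    """Compare dotted version strings numerically (ignores pre-release tags)."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split(".") if part.isdigit())
    return parse(installed) >= parse(minimum)


def has_jsonschema_protocols() -> bool:
    """True if jsonschema.protocols is importable in this environment."""
    try:
        return importlib.util.find_spec("jsonschema.protocols") is not None
    except ModuleNotFoundError:  # jsonschema itself is not installed
        return False
```

If the check fails, `pip install --upgrade jsonschema` (or pinning `jsonschema>=4.19.0` in your requirements file) pulls in a release that ships the protocols module.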

eff-kay commented 3 weeks ago

I got the same error and resolved it by upgrading jsonschema to the latest version. jsonschema introduced the protocols module in v4.19.0: https://github.com/python-jsonschema/jsonschema/releases/tag/v4.19.0

Yes, that's how I resolved it as well.