vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Unable to load the tokenizers of certain models #8994

Open Wafaa014 opened 1 week ago

Wafaa014 commented 1 week ago

Your current environment

The output of `python collect_env.py`:

```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.29.5
Libc version: glibc-2.34

Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.31.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7H12 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: disabled
CPU(s) scaling MHz: 100%
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5190.57
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.555.43
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.0.2
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.1
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 12.555.43 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 26.0.2 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.45.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.dev50+gbe76e5aa.d20240930
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
```

Model Input Dumps

No response

🐛 Describe the bug

I am trying to load the llama-2-13B and ALMA-13B models and I am getting errors related to their tokenizers. Note that loading the 7B versions of the same models works fine. The vLLM and transformers versions I am using are shown in the environment details above.

Here are the error logs:

llama-2-13B

 model = LLM(model=name_path,seed=42,trust_remote_code=True,tensor_parallel_size=1)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 214, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 574, in from_engine_args
    engine = cls(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 309, in __init__
    self.tokenizer = self._init_tokenizer()
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 618, in _init_tokenizer
    return init_tokenizer_from_configs(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 28, in init_tokenizer_from_configs
    return get_tokenizer_group(parallel_config.tokenizer_pool_config,
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 49, in get_tokenizer_group
    return tokenizer_cls.from_config(tokenizer_pool_config, **init_kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 30, in from_config
    return cls(**init_kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in __init__
    self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer.py", line 140, in get_tokenizer
    raise e
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer.py", line 119, in get_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 907, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2216, in from_pretrained
    return cls._from_pretrained(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2431, in _from_pretrained
    tokenizer_file_handle = json.load(tokenizer_file_handle)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 93391 column 2 (char 1700995)
alma-13B

Traceback (most recent call last):
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1592, in convert_slow_tokenizer
    ).converted()
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1489, in converted
    tokenizer = self.tokenizer()
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1482, in tokenizer
    vocab_scores, merges = self.extract_vocab_merges_from_model(self.vocab_file)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1458, in extract_vocab_merges_from_model
    bpe_ranks = load_tiktoken_bpe(tiktoken_url)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/tiktoken/load.py", line 148, in load_tiktoken_bpe
    return {
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/tiktoken/load.py", line 150, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: not enough values to unpack (expected 2, got 1)

During handling of the above exception, another exception occurred:

   model = LLM(model=name_path,seed=42,trust_remote_code=True,tensor_parallel_size=1)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 214, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 574, in from_engine_args
    engine = cls(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 309, in __init__
    self.tokenizer = self._init_tokenizer()
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 618, in _init_tokenizer
    return init_tokenizer_from_configs(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 28, in init_tokenizer_from_configs
    return get_tokenizer_group(parallel_config.tokenizer_pool_config,
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 49, in get_tokenizer_group
    return tokenizer_cls.from_config(tokenizer_pool_config, **init_kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 30, in from_config
    return cls(**init_kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in __init__
    self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer.py", line 140, in get_tokenizer
    raise e
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer.py", line 119, in get_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 907, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2216, in from_pretrained
    return cls._from_pretrained(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2450, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 157, in __init__
    super().__init__(
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 138, in __init__
    fast_tokenizer = convert_slow_tokenizer(self, from_tiktoken=True)
  File "/home/wmohammed/.conda/envs/alti/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1594, in convert_slow_tokenizer
    raise ValueError(
ValueError: Converting from Tiktoken failed, if a converter for SentencePiece is available, provide a model path with a SentencePiece tokenizer.model file.Currently available slow->fast convertors: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer', 'BertTokenizer', 'BigBirdTokenizer', 'BlenderbotTokenizer', 'CamembertTokenizer', 'CLIPTokenizer', 'CodeGenTokenizer', 'ConvBertTokenizer', 'DebertaTokenizer', 'DebertaV2Tokenizer', 'DistilBertTokenizer', 'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer', 'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FNetTokenizer', 'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer', 'LayoutLMv2Tokenizer', 'LayoutLMv3Tokenizer', 'LayoutXLMTokenizer', 'LongformerTokenizer', 'LEDTokenizer', 'LxmertTokenizer', 'MarkupLMTokenizer', 'MBartTokenizer', 'MBart50Tokenizer', 'MPNetTokenizer', 'MobileBertTokenizer', 'MvpTokenizer', 'NllbTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer', 'Qwen2Tokenizer', 'RealmTokenizer', 'ReformerTokenizer', 'RemBertTokenizer', 'RetriBertTokenizer', 'RobertaTokenizer', 'RoFormerTokenizer', 'SeamlessM4TTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer', 'UdopTokenizer', 'WhisperTokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer', 'SplinterTokenizer', 'XGLMTokenizer', 'LlamaTokenizer', 'CodeLlamaTokenizer', 'GemmaTokenizer', 'Phi3Tokenizer']

kylesayrs commented 1 week ago

@Wafaa014 Could you please post the exact model=name_path you're using?

Wafaa014 commented 1 week ago

Sure, it is:

name_path=meta-llama/Llama-2-13b-hf
name_path=haoranxu/ALMA-13B

insidesecurity-yhojann-aguilera commented 6 days ago

This is a transformers bug: https://github.com/huggingface/transformers/issues/33746

ArthurZucker commented 5 days ago

Are you sure you have tokenizers installed?

from transformers import AutoTokenizer
AutoTokenizer.from_pretrained('haoranxu/ALMA-13B')

has no issues for me unless I uninstall sentencepiece: https://huggingface.co/haoranxu/ALMA-13B/tree/main does not have a tokenizer.json, so sentencepiece is needed for the conversion.

The error is misleading, however.

Wafaa014 commented 5 days ago

I do have sentencepiece and tokenizers installed. Can you please share the versions of the packages you are using?

ArthurZucker commented 4 days ago

[screenshot showing installed package versions]

(I don't have vllm installed.) This is on git checkout v4.45.1.

Wafaa014 commented 1 day ago

I have created a new env with those exact versions and I am still not able to load the models. I was able to trace the error to this file: https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py. Specifically, the ValueError at line 1597 is being raised.

ArthurZucker commented 1 day ago

Let's focus this discussion on https://github.com/huggingface/transformers/issues/33746. I think this is an environment issue!

teodororo commented 12 hours ago

I ran into a similar issue in a Databricks environment and solved it with `dbutils.library.restartPython()`.