huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Bug when using StaticCache in Qwen2.5 Inference #34678

Open BBuf opened 2 weeks ago

BBuf commented 2 weeks ago

System Info

Collecting environment information...
WARNING 11-10 14:19:08 _custom_ops.py:14] Failed to import from vllm._C with ImportError('/mnt/bbuf/vllm-backup/vllm/_C.abi3.so: undefined symbol: _ZN5torch3jit11parseSchemaERKSs')
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090

Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             128
On-line CPU(s) list:                0-127
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) Gold 6462C
CPU family:                         6
Model:                              143
Thread(s) per core:                 2
Core(s) per socket:                 32
Socket(s):                          2
Stepping:                           8
CPU max MHz:                        3900.0000
CPU min MHz:                        800.0000
BogoMIPS:                           6600.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          3 MiB (64 instances)
L1i cache:                          2 MiB (64 instances)
L2 cache:                           128 MiB (64 instances)
L3 cache:                           120 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-31,64-95
NUMA node1 CPU(s):                  32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] flashinfer==0.1.6+cu121torch2.4
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.4.0
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchao==0.6.1
[pip3] torchvision==0.19.0
[pip3] transformers==4.47.0.dev0
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: 6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PIX     SYS     SYS     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU1    PIX      X      SYS     SYS     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU2    SYS     SYS      X      PIX     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU3    SYS     SYS     PIX      X      SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU4    SYS     SYS     SYS     SYS      X      PIX     SYS     SYS     32-63,96-127    1               N/A
GPU5    SYS     SYS     SYS     SYS     PIX      X      SYS     SYS     32-63,96-127    1               N/A
GPU6    SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     32-63,96-127    1               N/A
GPU7    SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      32-63,96-127    1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Who can help?

No response

Reproduction

When I use StaticCache to run inference on Qwen2.5, a bug occurs. In this example, I pass the embedding-layer output to model.generate instead of the token IDs from the tokenizer. The reproduction script is as follows:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache, StaticCache

model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.generation_config.max_new_tokens = 128

prompt_cache = StaticCache(config=model.config, batch_size=1, max_cache_len=32768, device="cuda", dtype=torch.bfloat16)

INITIAL_PROMPT = "You are a helpful assistant. "
inputs_initial_prompt = tokenizer(INITIAL_PROMPT, return_tensors="pt").to("cuda")

inputs_embeds = model.get_input_embeddings()(inputs_initial_prompt.input_ids)
# First generate() call: prefill the StaticCache from the embeddings (this succeeds).
outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)

response = tokenizer.batch_decode(outputs)[0]
print(response)

prompts = ["Help me to write a blogpost about travelling."]
responses = []
for prompt in prompts:
    new_inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    # With inputs_embeds, generate() returns only the newly generated token IDs,
    # so concatenate them with the new prompt before embedding again.
    new_input_ids = torch.cat([outputs, new_inputs.input_ids], dim=1)

    inputs_embeds = model.get_input_embeddings()(new_input_ids)

    # Second generate() call with the same StaticCache: this is where the error occurs.
    outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)
    response = tokenizer.batch_decode(outputs)[0]
    print(response)
    responses.append(response)

I am using the latest version of Transformers, built from source. The error message is as follows:

Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:02<00:00,  1.42it/s]
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
 You will be given a task.  You must generate a detailed and long response.  You should use your own words.  You should not simply translate or repeat the prompt.  You can write about related topics if it helps make your response more detailed. Sure, I'd be happy to provide a detailed and long response on the topic you've presented. However, since no specific topic was mentioned in your request, I'll assume you're interested in a comprehensive discussion about the benefits of renewable energy sources. Let's dive into this fascinating subject.

Renewable energy sources, such as solar, wind, hydroelectric, geothermal,
Traceback (most recent call last):
  File "/mnt/bbuf/transformers/../debug.py", line 83, in <module>
    outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 2231, in generate
    result = self._sample(
  File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 3215, in _sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 454, in prepare_inputs_for_generation
    attention_mask = causal_mask_creation_function(
  File "/mnt/bbuf/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 1063, in _prepare_4d_causal_attention_mask_with_cache_position
    causal_mask *= diagonal_attend_mask
RuntimeError: The size of tensor a (0) must match the size of tensor b (4) at non-singleton dimension 0

Expected behavior

I can successfully run the above script using StaticCache.

BBuf commented 2 weeks ago

@ArthurZucker Hi, when you have time, could you please help take a look at this bug? Thank you very much.

Vishal-Padia commented 2 weeks ago

Can you try using input_ids directly instead of inputs_embeds? This may help avoid the dimension mismatch:

outputs = model.generate(input_ids=input_ids, past_key_values=prompt_cache)

The StaticCache mechanism is designed to work with input tokens rather than embeddings directly. By letting the model handle the embedding process internally, we avoid dimension mismatch issues during the attention mask creation.
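
For reference, here is a minimal sketch of that token-ID flow, mirroring the reproduction script above (model, tokenizer, and cache settings are reused from it). This only illustrates the suggestion and is not a verified fix for the embeddings use case:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StaticCache

model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.generation_config.max_new_tokens = 128

prompt_cache = StaticCache(config=model.config, batch_size=1, max_cache_len=32768, device="cuda", dtype=torch.bfloat16)

# First turn: pass token IDs (plus the tokenizer's attention mask) instead of embeddings.
inputs = tokenizer("You are a helpful assistant. ", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, past_key_values=prompt_cache)  # returns prompt + generated tokens

# Second turn: with input_ids, generate() returns the full sequence, so concatenating
# the new prompt onto `outputs` keeps the token history aligned with the cache.
new_inputs = tokenizer("Help me to write a blogpost about travelling.", return_tensors="pt").to("cuda")
input_ids = torch.cat([outputs, new_inputs.input_ids], dim=1)
outputs = model.generate(input_ids=input_ids, past_key_values=prompt_cache)
print(tokenizer.batch_decode(outputs)[0])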

BBuf commented 2 weeks ago

> Can you try using input_ids directly instead of inputs_embeds? This may help avoid the dimension mismatch:
>
> outputs = model.generate(input_ids=input_ids, past_key_values=prompt_cache)
>
> The StaticCache mechanism is designed to work with input tokens rather than embeddings directly. By letting the model handle the embedding process internally, we avoid dimension mismatch issues during the attention mask creation.

I can use StaticCache with input_ids, but unfortunately, in my scenario, I can't provide input_ids to the model.generate API, so it looks like I'll have to give up using StaticCache.

Vishal-Padia commented 2 weeks ago

Can you explain why you can't pass input_ids to model.generate?

BBuf commented 1 week ago

> Can you explain why you can't pass input_ids to model.generate?

Because the features I pass to model.generate are a combination of encoded audio features and text features that have already gone through the LLM's embedding layer, there are no token IDs I could pass instead.
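
To make the use case concrete, here is a rough sketch of that setup. `audio_encoder` and `audio_waveform` are hypothetical placeholders for the audio front-end (its output is assumed to already be projected to the LLM hidden size), and `model`, `tokenizer`, and `prompt_cache` are as in the reproduction script above:

import torch

# Hypothetical sketch: `audio_encoder` / `audio_waveform` stand in for the real audio
# front-end, which is outside the scope of this issue; its output is assumed to already
# match the LLM hidden size. `model`, `tokenizer`, `prompt_cache` are as in the script above.
text_inputs = tokenizer("Describe this audio clip.", return_tensors="pt").to("cuda")
text_embeds = model.get_input_embeddings()(text_inputs.input_ids)   # (1, T_text, hidden)
audio_embeds = audio_encoder(audio_waveform).to(text_embeds.dtype)  # (1, T_audio, hidden)

# The audio positions have no token IDs, so only inputs_embeds can be passed to generate().
inputs_embeds = torch.cat([audio_embeds, text_embeds], dim=1)
outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)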

Vishal-Padia commented 1 week ago

After going down the rabbit hole, here's what I think: when we use inputs_embeds instead of input_ids, we must explicitly provide an attention mask, since the model cannot infer it from the embedded inputs. Can you try the following:

# Derive the mask shape (batch_size, sequence_length) from the embeddings.
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long, device="cuda")
outputs = model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=attention_mask,
    past_key_values=prompt_cache,
)

zucchini-nlp commented 1 week ago

StaticCache should work with inputs_embeds even without an attention mask, and from the code snippet I can see that the first generate() call succeeded. The second call fails because we currently cannot continue generation with embeds as inputs, due to how the model's forward kwargs are prepared internally in the generation logic. So in this case, even if we bypass the error with an attention mask, the model will only use the cached inputs and disregard the newly concatenated prompt.
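
To make the traceback above concrete, here is a rough standalone illustration of the broadcast failure; the shapes are illustrative only and are not taken from the model internals:

import torch

# One mask ends up with zero rows along dim 0 while the other still has rows for the
# new prompt tokens, so the in-place multiply in
# _prepare_4d_causal_attention_mask_with_cache_position cannot broadcast.
causal_mask = torch.zeros(0, 16)
diagonal_attend_mask = torch.ones(4, 16)
causal_mask *= diagonal_attend_mask  # RuntimeError: size of tensor a (0) vs tensor b (4) at dim 0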

I'll check whether we can make room for continuing generation with embeds later next week; also, feel free to open a PR if you have an initial fix :)