Status: Open. khluu opened this issue 3 months ago.
The same problem happens to me. Is a fix for this bug in progress?
@DeJoker do you also see it in unit tests or other places? How are you running it?
The issue with the spec decoding tests should already be fixed.
@khluu I don't have a demo right now that reproduces the problem; I just hit the same issue with flash_attn_cuda.fwd_kvcache.
The situation: vLLM is started inside NVIDIA Triton Server (nvcr.io/nvidia/tritonserver:24.05-vllm-python-py3), and requests are sent directly with a gRPC client.
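For reference, a minimal sketch of how such a request can be sent with the Triton gRPC client. This assumes the stock Triton vLLM backend input names (text_input, stream, sampling_parameters) and a model deployed under the hypothetical name "vllm_model"; both are assumptions about this particular deployment, not details from the report.

# Minimal sketch: streaming gRPC request to the Triton vLLM backend.
# "vllm_model" is a placeholder model name for this deployment.
import json
import queue
from functools import partial

import numpy as np
import tritonclient.grpc as grpcclient

def on_result(results, result, error):
    # Triton delivers each streamed response (or an error) to this callback.
    results.put(error if error is not None else result)

results = queue.Queue()
client = grpcclient.InferenceServerClient(url="localhost:8001")
client.start_stream(callback=partial(on_result, results))

inputs = [
    grpcclient.InferInput("text_input", [1], "BYTES"),
    grpcclient.InferInput("stream", [1], "BOOL"),
    grpcclient.InferInput("sampling_parameters", [1], "BYTES"),
]
inputs[0].set_data_from_numpy(np.array(["Hello, how are you?"], dtype=object))
inputs[1].set_data_from_numpy(np.array([True], dtype=bool))
inputs[2].set_data_from_numpy(
    np.array([json.dumps({"temperature": 0.7, "max_tokens": 128})], dtype=object)
)

client.async_stream_infer(model_name="vllm_model", inputs=inputs)
response = results.get()  # first streamed chunk (or an error)
client.stop_stream()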
My environment setup:
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.134-13.1.al8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] transformers==4.41.0
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 0-127 0-1
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 0-127 0-1
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 0-127 0-1
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 0-127 0-1
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 0-127 0-1
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 0-127 0-1
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 0-127 0-1
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X 0-127 0-1
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
The error message:
INFO 06-18 08:04:36 metrics.py:341] Avg prompt throughput: 17673.7 tokens/s, Avg generation throughput: 204.0 tokens/s, Running: 233 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.8%, CPU KV cache usage: 0.0%.
INFO 06-18 08:04:41 metrics.py:341] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 313.0 tokens/s, Running: 190 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.4%, CPU KV cache usage: 0.0%.
ERROR 06-18 08:04:44 async_llm_engine.py:52] Engine background task failed
ERROR 06-18 08:04:44 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
ERROR 06-18 08:04:44 async_llm_engine.py:52] return_value = task.result()
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 532, in run_engine_loop
ERROR 06-18 08:04:44 async_llm_engine.py:52] has_requests_in_progress = await asyncio.wait_for(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
ERROR 06-18 08:04:44 async_llm_engine.py:52] return fut.result()
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 506, in engine_step
ERROR 06-18 08:04:44 async_llm_engine.py:52] request_outputs = await self.engine.step_async()
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
ERROR 06-18 08:04:44 async_llm_engine.py:52] output = await self.model_executor.execute_model_async(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 117, in execute_model_async
ERROR 06-18 08:04:44 async_llm_engine.py:52] output = await make_async(self.driver_worker.execute_model
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
ERROR 06-18 08:04:44 async_llm_engine.py:52] result = self.fn(*self.args, **self.kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 06-18 08:04:44 async_llm_engine.py:52] return func(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 280, in execute_model
ERROR 06-18 08:04:44 async_llm_engine.py:52] output = self.model_runner.execute_model(seq_group_metadata_list,
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 06-18 08:04:44 async_llm_engine.py:52] return func(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 749, in execute_model
ERROR 06-18 08:04:44 async_llm_engine.py:52] hidden_states = model_executable(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self._call_impl(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return forward_call(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 330, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self._call_impl(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return forward_call(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 254, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] hidden_states, residual = layer(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self._call_impl(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return forward_call(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 206, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] hidden_states = self.self_attn(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self._call_impl(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return forward_call(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 153, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self._call_impl(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 06-18 08:04:44 async_llm_engine.py:52] return forward_call(*args, **kwargs)
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/attention/layer.py", line 89, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] return self.impl.forward(query, key, value, kv_cache, attn_metadata,
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm/attention/backends/flash_attn.py", line 355, in forward
ERROR 06-18 08:04:44 async_llm_engine.py:52] output[num_prefill_tokens:] = flash_attn_with_kvcache(
ERROR 06-18 08:04:44 async_llm_engine.py:52] File "/usr/local/lib/python3.10/dist-packages/vllm_flash_attn/flash_attn_interface.py", line 1233, in flash_attn_with_kvcache
ERROR 06-18 08:04:44 async_llm_engine.py:52] out, softmax_lse = flash_attn_cuda.fwd_kvcache(
ERROR 06-18 08:04:44 async_llm_engine.py:52] RuntimeError: CUDA error: an illegal memory access was encountered
ERROR 06-18 08:04:44 async_llm_engine.py:52] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 06-18 08:04:44 async_llm_engine.py:52] For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
ERROR 06-18 08:04:44 async_llm_engine.py:52] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
ERROR 06-18 08:04:44 async_llm_engine.py:52]
Exception in callback _log_task_completion(error_callback=<bound method...7eff2e47e500>>)(<Task finishe...sertions.\n')>) at /usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py:32
handle: <Handle _log_task_completion(error_callback=<bound method...7eff2e47e500>>)(<Task finishe...sertions.\n')>) at /usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py:32>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
return_value = task.result()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 532, in run_engine_loop
has_requests_in_progress = await asyncio.wait_for(
File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 506, in engine_step
request_outputs = await self.engine.step_async()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
output = await self.model_executor.execute_model_async(
File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 117, in execute_model_async
output = await make_async(self.driver_worker.execute_model
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 280, in execute_model
output = self.model_runner.execute_model(seq_group_metadata_list,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 749, in execute_model
hidden_states = model_executable(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 330, in forward
hidden_states = self.model(input_ids, positions, kv_caches,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 254, in forward
hidden_states, residual = layer(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 206, in forward
hidden_states = self.self_attn(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 153, in forward
attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/attention/layer.py", line 89, in forward
return self.impl.forward(query, key, value, kv_cache, attn_metadata,
File "/usr/local/lib/python3.10/dist-packages/vllm/attention/backends/flash_attn.py", line 355, in forward
output[num_prefill_tokens:] = flash_attn_with_kvcache(
File "/usr/local/lib/python3.10/dist-packages/vllm_flash_attn/flash_attn_interface.py", line 1233, in flash_attn_with_kvcache
out, softmax_lse = flash_attn_cuda.fwd_kvcache(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 54, in _log_task_completion
raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
I0618 08:04:44.709818 1084 model.py:368] "[vllm] Error generating stream: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n"
I0618 08:04:44.710252 1084 model.py:368] "[vllm] Error generating stream: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n"
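As the log itself suggests, rerunning with CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous, so the Python traceback lands on the kernel that actually faulted rather than a later API call. A minimal sketch of a debug run; the env var must be set before CUDA is initialized (i.e. before importing torch/vllm), and the model name below is just a placeholder:

# Set before any CUDA initialization (before importing torch or vllm).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-7B-Instruct")  # placeholder model name
out = llm.generate(["hello"], SamplingParams(max_tokens=32))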
I get the same error. When I set max_num_seqs=20 the error appears; with max_num_seqs=18 everything works. It seems like some kind of memory overflow? BTW, my GPU is an H20, and the same code runs fine on my H800 machine.
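For context, max_num_seqs caps how many sequences the scheduler runs in one iteration, so lowering it shrinks the batch handed to the attention kernel. A sketch of setting it via the offline API (the model name is a placeholder, not the reporter's actual model):

from vllm import LLM

# max_num_seqs bounds the number of sequences per scheduler iteration;
# per the report above, 18 works while 20 triggers the illegal memory access.
llm = LLM(model="Qwen/Qwen2-7B-Instruct", max_num_seqs=18)  # placeholder model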
My environment setup:
1st environment: running on EC2 g6.4xlarge
2nd environment: running on GCP g2-standard-12
Repro commands:
docker build --build-arg max_jobs=16 --tag vllm --target test .
docker run -it --rm --gpus all vllm bash -c "cd /vllm-workspace/tests && pytest -v -s spec_decode"
🐛 Describe the bug
Nothing changed in the tests or relevant code. The only difference is that it's running on a different machine/environment than vLLM CI. I listed the 2 environments I tried, and both failed.
The error showed up when running this test in tests/spec_decode/e2e/test_multistep_correctness.py:
Test name: test_spec_decode_e2e_greedy_correctness_tiny_model_large_bs_diff_output_len[1-32-256-test_llm_kwargs0-baseline_llm_kwargs0-per_test_common_llm_kwargs1-common_llm_kwargs0]
kwargs={'enforce_eager': True, 'use_v2_block_manager': True, 'model': 'JackFram/llama-160m', 'speculative_model': 'JackFram/llama-68m', 'num_speculative_tokens': 5}
The failure message and stack trace start here: https://buildkite.com/vllm/ci-aws/builds/82#018fcb54-3ae6-4a96-8e2a-67c66814003d/184-356
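For anyone reproducing outside pytest, those kwargs map roughly onto the following engine construction; this is a sketch assuming the vLLM 0.5.0-era API, where LLM() forwards these keyword arguments to EngineArgs:

from vllm import LLM

# Rough standalone equivalent of the failing test's kwargs (vLLM 0.5.0-era API).
llm = LLM(
    model="JackFram/llama-160m",
    speculative_model="JackFram/llama-68m",
    num_speculative_tokens=5,
    enforce_eager=True,
    use_v2_block_manager=True,
)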
The error happens when flash_attn_cuda.fwd_kvcache is called in /attention/backends/flash_attn.py.
Running the test with VLLM_ATTENTION_BACKEND=XFORMERS passes. Could this bug be related to FlashAttention?
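To try that workaround, the backend override has to be in the environment before vLLM selects its attention backend; a minimal sketch:

# Force the xFormers attention backend instead of FlashAttention.
# Must be set before vllm initializes the model executor.
import os
os.environ["VLLM_ATTENTION_BACKEND"] = "XFORMERS"

from vllm import LLM

llm = LLM(model="JackFram/llama-160m")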