ChalvYongkang opened this issue 1 month ago
I solved this problem by changing "with sdpa_kernel(SDPBackend.FLASH_ATTENTION)" (line 824 of Allegro/allegro/models/transformers/block.py) to "with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=True, enable_mem_efficient=True)", which ensures flash attention is disabled.
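For reference, a minimal sketch of the change described above (the line number is as reported by the commenter; the tensors here are dummies just to exercise the context managers, not part of the Allegro code):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# Original (block.py, line 824 as reported): forces the flash kernel,
# which only exists on sm80-sm90 GPUs and therefore fails on a V100 (sm70).
# with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
#     out = F.scaled_dot_product_attention(q, k, v)

# Workaround: allow only the math and memory-efficient backends.
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=True, enable_mem_efficient=True):
    out = F.scaled_dot_product_attention(q, k, v)

# Roughly equivalent with the newer API, which accepts a list of backends
# (torch.backends.cuda.sdp_kernel is deprecated in recent PyTorch releases):
with sdpa_kernel([SDPBackend.MATH, SDPBackend.EFFICIENT_ATTENTION]):
    out = F.scaled_dot_product_attention(q, k, v)
```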
No, there is not. Feel free to modify the attention processor.
A new problem: after I changed my code as shown above, it now reports that it requires 560.82 GiB to run the test. And nothing changes even though enable_cpu_offload is set to True.
File "/Allegro/allegro/models/transformers/block.py", line 826, in call hidden_states = F.scaled_dot_product_attention( torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 560.82 GiB. GPU 0 has a total capacity of 31.74 GiB of which 26.35 GiB is free. Process 2048906 has 5.38 GiB memory in use. Of the allocated memory 4.81 GiB is allocated by PyTorch, and 218.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
What? 560 GiB? It seems something weird is happening on the V100. I remember I tested xformers on an A100 and the memory cost remained the same. We don't have a V100 and I'm afraid there's nothing I can do about it, unfortunately.
I found the issue. The V100 does not support bfloat16 precision, but it doesn't throw an error; the underlying implementation seems to fall back to some very expensive computation. After I switched to float16 precision, it ran successfully, using 6 GiB on a single GPU. However, generating one result takes about 4 hours, so I guess I need faster GPUs. :)
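A quick way to confirm this on your own machine (generic PyTorch introspection, independent of the Allegro code; the assumption here is that the rest of the pipeline accepts float16):

```python
import torch

# The V100 is compute capability sm70; bfloat16 gets hardware support only
# from Ampere (sm80) onward, so on older cards it is emulated slowly rather
# than rejected outright.
major, minor = torch.cuda.get_device_capability()
has_native_bf16 = (major, minor) >= (8, 0)
print(f"GPU compute capability: sm{major}{minor}, native bf16: {has_native_bf16}")

# Pick the inference dtype accordingly.
dtype = torch.bfloat16 if has_native_bf16 else torch.float16
print("using dtype:", dtype)
```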
How do you switch precision modes?
Just change line 13 of single_inference.py from "dtype=torch.bfloat16" to "dtype=torch.float16".
Didn't work either way, but thank you anyway :)
The sdp_kernel change above did work for me, but it is brutally slow (RTX 3090).
@Grownz that is literally unusable.
3 hours on an RTX 3090.
I know, I pointed that out too.
@Grownz do you think that can be sped up somehow? Or do we have to wait for the RTX 5090 :D
I don't think this is due to low raw performance, but due to unsupported attention modes (to dive deeper: https://developer.nvidia.com/blog/emulating-the-attention-mechanism-in-transformer-models-with-a-fully-convolutional-network/). This might be solved via updated drivers, but since NVIDIA doesn't care much about ML on consumer hardware, I doubt there will be an immediate official solution.
@Grownz so again it comes down to NVIDIA's shameless monopoly :( ty
The sdp_kernel change above worked for me on an RTX 4090.
Is there a strict requirement for GPUs that support flash_attention? I tried to run the test on a V100, but this GPU does not support flash attention, which results in an error: RuntimeError: No available kernel. Aborting execution.
/Allegro/allegro/models/transformers/block.py:824: UserWarning: Memory efficient kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:723.)
  hidden_states = F.scaled_dot_product_attention(
/Allegro/allegro/models/transformers/block.py:824: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at ../aten/src/ATen/native/transformers/sdp_utils_cpp.h:495.)
  hidden_states = F.scaled_dot_product_attention(
/Allegro/allegro/models/transformers/block.py:824: UserWarning: Flash attention kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:725.)
  hidden_states = F.scaled_dot_product_attention(
/Allegro/allegro/models/transformers/block.py:824: UserWarning: Flash attention only supports gpu architectures in the range [sm80, sm90]. Attempting to run on a sm 7.0 gpu. (Triggered internally at ../aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:201.)
  hidden_states = F.scaled_dot_product_attention(
/Allegro/allegro/models/transformers/block.py:824: UserWarning: CuDNN attention kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:727.)
  hidden_states = F.scaled_dot_product_attention(
/Allegro/allegro/models/transformers/block.py:824: UserWarning: The CuDNN backend needs to be enabled by setting the enviornment variable TORCH_CUDNN_SDPA_ENABLED=1 (Triggered internally at ../aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:496.)
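To see why every kernel is being rejected on the V100, a small diagnostic along these lines can help (plain PyTorch introspection, not part of the Allegro code):

```python
import os
import torch

# sm80-sm90 is the range the warning above refers to; a V100 reports sm70.
major, minor = torch.cuda.get_device_capability()
print(f"GPU architecture: sm{major}{minor} (flash attention needs sm80-sm90)")

# Which SDPA backends are currently enabled in this process.
print("flash enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math enabled:         ", torch.backends.cuda.math_sdp_enabled())

# The cuDNN backend additionally wants this environment variable set
# before the process starts (per the warning text above).
print("TORCH_CUDNN_SDPA_ENABLED =", os.environ.get("TORCH_CUDNN_SDPA_ENABLED"))
```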