Closed · alan-pro closed this issue 2 months ago
UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
x = F.scaled_dot_product_attention(

It seems to be a problem with flash attention. I am running on Windows and the GPU is a 4070. Do you know how to solve it?

Flash attention is enabled in the Transformer decoder of Segment Anything 2 to speed up attention computation. However, since SAM2-UNet removes SAM2's decoder, you can simply ignore this warning. Our code also disables flash attention by default.
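If you want to silence the warning on your own setup, the sketch below shows one way to do it with plain PyTorch (this is not code from this repository, and the tensor shapes are made up for illustration): wrap the attention call in torch.backends.cuda.sdp_kernel with the flash backend disabled, so F.scaled_dot_product_attention falls back to the math / memory-efficient kernels.

```python
import torch
import torch.nn.functional as F

# Illustrative tensors only: (batch, heads, sequence length, head dim).
device = "cuda" if torch.cuda.is_available() else "cpu"
q = torch.randn(1, 8, 64, 32, device=device)
k = torch.randn(1, 8, 64, 32, device=device)
v = torch.randn(1, 8, 64, 32, device=device)

# Disable the flash-attention backend so scaled_dot_product_attention
# uses the math / memory-efficient kernels instead, which avoids the
# "was not compiled with flash attention" warning on Windows builds.
with torch.backends.cuda.sdp_kernel(
    enable_flash=False,
    enable_math=True,
    enable_mem_efficient=True,
):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.shape)  # torch.Size([1, 8, 64, 32])
```

If I remember correctly, newer PyTorch releases expose the same switch as torch.nn.attention.sdpa_kernel, so check which one your version provides.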
Ok, thanks