Hello!

So I have fixed this issue where my CUDA was disabled. However, when I generate anything with my 3070 Ti, it produces the output below, lags the computer a lot, and never gives me the results even when it's at 100%:
Prompt: test
522696456
0%| | 0/20 [00:00<?, ?it/s]C:\Users\me\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\backends\cuda\__init__.py:342: FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
warnings.warn(
E:\AiFolder\SAudio\stable-audio-tools\stable_audio_tools\models\transformer.py:379: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = F.scaled_dot_product_attention(
90%|█████████████████████████████████████████████████████████████████████████▊ | 18/20 [00:04<00:00, 3.78it/s]C:\Users\me\AppData\Local\Programs\Python\Python38\lib\site-packages\torchsde\_brownian\brownian_interval.py:599: UserWarning: Should have ta>=t0 but got ta=0.029999999329447746 and t0=0.03.
warnings.warn(f"Should have ta>=t0 but got ta={ta} and t0={self._start}.")
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.66it/s]
I have CUDA 12.1, Python 3.8.1 (3.10 had the same issue), and PyTorch+cu121.
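For reference, here's a quick sanity check to confirm the cu121 wheel actually sees the card (just a minimal sketch):

import torch

print(torch.__version__, torch.version.cuda)  # expect something like 2.x.x+cu121 and 12.1
print(torch.cuda.is_available())              # should be True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))      # should report the 3070 Ti

If is_available() returns False or the device name is wrong, the lag is probably CPU fallback rather than anything attention-related.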
I'm wondering if this can be fixed. I want to use my 3070 Ti to get faster results. Also, is flash_attn important to have?
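In case it's useful, here is a minimal sketch to test whether the installed wheel actually has flash attention (assuming PyTorch 2.2+, which is where the torch.nn.attention module mentioned in the FutureWarning lives):

import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Dummy half-precision tensors on the GPU; the flash kernel only
# supports fp16/bf16 inputs.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

try:
    # Force the flash backend; this raises if the wheel wasn't built with it,
    # which is what the "1Torch was not compiled with flash attention" warning suggests.
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        F.scaled_dot_product_attention(q, k, v)
    print("flash attention kernel is available")
except RuntimeError as err:
    print("flash attention unavailable:", err)

From what I understand, that warning itself is harmless: scaled_dot_product_attention just falls back to a slower kernel, so flash_attn is a speed optimization rather than a requirement.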