unclemusclez opened 2 weeks ago
Do you just need comfyui to work? If so, try WSL with ROCm. It supports Flash Attention 2. https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-24-10-21-01-WSL-2.html
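For reference, a minimal setup sketch for getting the ROCm PyTorch build inside WSL (the wheel index version below is an assumption; match it to the ROCm version from AMD's release notes):

```shell
# Inside an Ubuntu WSL2 distro, with the AMD WSL ROCm driver installed on
# the Windows host. The rocm6.0 index is an assumption -- pick the index
# matching your installed ROCm version.
pip uninstall -y torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0

# Sanity check: a ROCm build reports a HIP version (torch.version.hip is
# not None), and the GPU shows up through the CUDA-compatible API.
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```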
I'm trying it now... when did this come out?
Very recently. Are you on gfx1100? (RX 7900 XT(X), GRE, etc)
Yes, 7900 XT.
So I've been testing the ROCm driver for WSL.
There are still use cases for ZLUDA with PyTorch, particularly pertaining to https://github.com/hpcaitech/Open-Sora, which seems to need CUDA. That said, I find ROCm is about 2-3x faster than ZLUDA with PyTorch.
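A rough way to reproduce that kind of comparison is to time an identical workload under each backend; this is a minimal sketch (the matmul size and iteration count are arbitrary, not what I actually measured):

```python
import time
import torch

def bench_matmul(device: str, n: int = 2048, iters: int = 20) -> float:
    """Time `iters` square matmuls on `device`; return seconds per iteration."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up so kernel compilation/caching doesn't skew the measurement.
    for _ in range(3):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    if device == "cuda":
        # GPU kernels launch asynchronously; wait for them before stopping the clock.
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

# On a ROCm build "cuda" maps to the AMD GPU; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"{device}: {bench_matmul(device):.4f} s/iter")
```

Running the same script under ZLUDA and under the native ROCm wheel gives a like-for-like number per backend.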
I compiled ZLUDA:

Finished `release` profile [optimized] target(s) in 5m 40s
I downloaded NCCL from NVIDIA and placed it inside the ZLUDA directory at P:\gitrepos\ZLUDA\nccl_2.21.5-1+cuda11.0_x86_64, then built PyTorch with pytorch-build.bat.
Is it possible with this configuration to set torch.backends.cudnn.enabled = True? This is the error I get with torch.backends.cudnn.enabled = True set; perhaps it is unrelated, but I am just trying to allow xformers to function.
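For what it's worth, you can check whether the cuDNN backend is actually present before flipping that flag. My guess (not confirmed) is that under ZLUDA the cuDNN libraries simply aren't there, so enabling the flag makes cuDNN-backed ops fail:

```python
import torch

# `enabled` is only a switch; it does not make cuDNN exist. If the backend
# libraries are missing, is_available() is False and version() returns None.
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:  ", torch.backends.cudnn.version())

if torch.backends.cudnn.is_available():
    torch.backends.cudnn.enabled = True
else:
    # Leave it off so PyTorch falls back to its native kernels.
    torch.backends.cudnn.enabled = False
```

If is_available() is False in your ZLUDA environment, that would point at the missing backend rather than at xformers itself.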