KillyTheNetTerminal opened this issue 3 weeks ago
Oh, cool, but using ROCm? I use DirectML on Windows (RX 580). How did you install this? I get errors and my ComfyUI installation got corrupted.
Yes, with ROCm. I've never tried DirectML on Windows myself, sorry.
Hmm, it requires flash_attn, which requires NVIDIA, no? How did you get it working with ROCm?
```
TypeError: expected string or bytes-like object
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for flash_attn
Running setup.py clean for flash_attn
Failed to build flash_attn
```
I don't know how to help you; I installed it with ComfyUI Manager, like everything I install in ComfyUI, and it worked right away.
I think it's because StableAudioSampler was updated today/yesterday? I installed it fine from the ComfyUI Manager last week, but today, no go. Anyway, I found there is a ROCm flash_attn from our friends at AMD.com.
This installs Flash Attention 2 for ROCm, and the node is available. I'm in the process of testing whether it actually works, but I'm confident. :-)
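One quick way to check whether that install actually took (before running a whole workflow) is to probe for the package from Python. A stdlib-only sketch; the fallback label is just illustrative, not what ComfyUI actually reports:

```python
import importlib.util

def has_flash_attn() -> bool:
    """Return True if the flash_attn package is importable in this environment."""
    return importlib.util.find_spec("flash_attn") is not None

# Label which attention path we'd expect to be used (illustrative names only).
backend = "flash_attn" if has_flash_attn() else "default attention (no flash_attn)"
print(f"using: {backend}")
```

Note that importability alone isn't enough: as seen below, the ROCm build can import fine and still refuse to run on an unsupported GPU architecture.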
NO!! It installed, but complained that I needed an MI200 or MI300 GPU when I ran a workflow with StableAudioSampler. GaAAhHhH!!
I'm a little confused about all this, haha... flash attention is something that's not available in DirectML? What is ROCm about? I installed a ComfyUI ZLUDA version and it also disables that option.
I have to update^^ Tomorrow I may have more time to try it
I'm not really good with the tech, but ROCm works on Linux, and soon on Windows.
Well, I tried. Option one installed but told me it only works on MI200/MI300 GPUs (it tells me that when I try to run a workflow); the other option, 'Triton', won't build: it fails getting the PyTorch version.
No issues with ROCm itself; PyTorch and onnxruntime work fine using ROCm 6.0.2 (stable).
As flash_attn errors out of the wheel build at the PyTorch version check, I'm assuming it wants an older ROCm, like 5.7 maybe? ROCm changes the PyTorch version number (adds rocm-6.0 to the version string).
- Arch Linux
- Pythons 3.10 through 3.12 tested
- PyTorch ROCm 6.0.2 from the official site (https://pytorch.org/get-started/locally/)
- Radeon RX 6900XT (gfx1030)
- Clean venv: just install PyTorch, then try to install flash_attn

I tried just installing it from ComfyUI Manager and there is no difference in the errors; it wants this flash_attn and I can't get it to work.
Seems on Linux one of the dependencies of StableAudioSampler's requirements is flash_attn. I had never come across it before, but there it is. Don't know what changed, as I installed StableAudioSampler only a week ago without issues; somewhere in between, this flash_attn appeared as a dependency.
Windows uses DirectML/ZLUDA. On Linux, for AMD, we use ROCm/OpenCL.
Oops, sorry, then I've hijacked your thread; I just assumed Linux. Err, yes, as far as I am aware this works for AMD, just not today on Linux, LOL.
Err, hmm, OK, well: remove the flash_attn line from your requirements.txt, then do `pip install -r requirements.txt`. Everything goes fine and dandy without flash_attn; the ComfyUI-StableAudioSampler workflow works fine and outputs music. Solved, pheeewww. Tested on Python 3.10.
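The workaround above can be sketched as a couple of shell commands. The requirements.txt contents here are made up for illustration; run this against the node's real requirements.txt (likely under `custom_nodes/ComfyUI-StableAudioSampler/` in your ComfyUI checkout):

```shell
# Hypothetical stand-in for the node's requirements.txt:
printf '%s\n' 'stable-audio-tools' 'flash_attn' 'soundfile' > requirements.txt

# Drop the flash_attn line; grep -v keeps every other line.
grep -v '^flash_attn' requirements.txt > requirements.tmp
mv requirements.tmp requirements.txt

cat requirements.txt

# Then reinstall the remaining dependencies:
# pip install -r requirements.txt
```

Since the node apparently runs fine without it, this just sidesteps the broken wheel build rather than fixing it.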
You should definitely post your own issue topic if you want the developer to take care of it.
Currently using it on my Ubuntu PC with an AMD 6800XT graphics card: about 45 seconds for a 45-second song with default parameters (100 steps).