Closed: Cleanup-Crew-From-Discord closed this 1 year ago
Yes, it's too new for the ROCm version that torch is compiled with.
> Yes, it's too new for the ROCm version that torch is compiled with.
Well, there go my dreams of 20 GB of VRAM and crazy compute power (for now). If you or anyone else knows: how long did it take for torch to update to a ROCm version that supported the previous-gen cards? I'd like at least some kind of guess for a time frame until it becomes usable again.
Having exactly the same problem right now. This alternative kinda works, but it has limited functionality. I believe someone mentioned that RDNA2 got ROCm support only a year after its release, so I'm not very optimistic.
> Yes, it's too new for the ROCm version that torch is compiled with.
>
> Well, there go my dreams of 20 GB of VRAM and crazy compute power (for now). If you or anyone else knows: how long did it take for torch to update to a ROCm version that supported the previous-gen cards? I'd like at least some kind of guess for a time frame until it becomes usable again.
What compute power? Even the previous-gen 6900 XT got smoked by a 3050 out of the box with xformers in a Stable Diffusion benchmark, because of the acceleration CUDA provides. And that benchmark was done in November this year, which should also tell you something about the support RDNA2 has.
From what I can tell, ROCm has at least partial support for RDNA3, but I have no idea how complete it is. I've tried to build pytorch myself, but it's quite difficult and I've kinda given up on it.
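For anyone else considering a from-source build: a rough sketch of what that attempt looks like, with the heavy steps commented out. This is my own assumption about the current PyTorch/ROCm build flow, not a verified recipe — it assumes a working ROCm install and a pytorch source checkout, and gfx1100 as the ISA target for Navi31 (7900 XT/XTX):

```shell
# Hedged sketch: building PyTorch against ROCm for an RDNA3 card.
# PYTORCH_ROCM_ARCH selects which GPU ISA targets kernels are built for;
# gfx1100 is the RDNA3 / Navi31 target (an assumption on my part).
export USE_ROCM=1
export PYTORCH_ROCM_ARCH=gfx1100

# Inside a pytorch source checkout (commented out; these take hours):
# python tools/amd_build/build_amd.py   # "hipify": rewrite CUDA sources to HIP
# python setup.py install

echo "$PYTORCH_ROCM_ARCH"
```

Even when the build succeeds, whether the resulting wheel actually runs on RDNA3 depends on the ROCm libraries underneath, which is exactly the open question in this thread.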
> Having exactly the same problem right now. This alternative kinda works, but it has limited functionality.
Will definitely give this a look for now!
What guide are you following to use the webui with your AMD card? I currently have an RX 5700 XT and I'm struggling to make it work. What guide do you recommend?
@ClashSAN consider transferring this to discussion since it is a torch related issue
> What guide are you following to use the webui with your AMD card? I currently have an RX 5700 XT and I'm struggling to make it work. What guide do you recommend?
There was also one on Reddit; I've lost the link to it.
The computing power the hardware has. CUDA as software does not per se provide any computing power; CUDA as a hardware architecture does, just as CDNA/RDNA do. Whether the libraries are optimized (sometimes they're just left to run generic functions) to make correct use of other vendors' GPU architectures, given the popularity, mindshare, and the $$ thrown by a specific vendor to push their own architecture and software, is a different matter.
See the example of Topaz Video Enhance AI, which makes balanced use of GPUs from NVIDIA, AMD and Intel.
Now that AMD, NVIDIA and Intel all have AI accelerator units (not to mention accelerators of other kinds and vendors), library/framework developers can no longer be so lazy, or so bought$.
> Whether the libraries are not optimized to make correct use of other vendors' GPU architectures due to popularity and mindshare and the $$ thrown by a specific vendor to support their own architecture and software is a different matter.
That is the only matter here. There is a reason people choose NVIDIA GPUs over AMD GPUs for DL despite the higher prices. Anyway, since you are using pytorch, you have to live with what pytorch supports.
> Whether the libraries are not optimized to make correct use of other vendors' GPU architectures due to popularity and mindshare and the $$ thrown by a specific vendor to support their own architecture and software is a different matter.
>
> That is the only matter here. There is a reason people choose NVIDIA GPUs over AMD GPUs for DL despite the higher prices. Anyway, since you are using pytorch, you have to live with what pytorch supports.
Well, I mainly use TF. And yes, with my AMD...
I am referring to this repo, not what you use in other projects. Unless you don't use this repo.
> I am referring to this repo, not what you use in other projects. Unless you don't use this repo.
I do. Pure CPU compute, but I manage. Meanwhile, I use an alternative as a sidestep, for faster testing, before a full generation. Anyway...
Is there an existing issue for this?
What happened?
I have seen similar issues, but none specifically relating to users with new RDNA3 cards.
Following the guide to install on AMD-based systems on Linux, I run into the following error when launching:
One workaround mentioned by other users is adding HSA_OVERRIDE_GFX_VERSION=10.3.0 when calling launch.py to trick the system into using the GPU anyway. It worked for previous cards, but for me it abruptly segfaults (maybe I missed something in the wiki, but I can't find any kind of log for said dump).
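For reference, the override is just an environment variable set for the launch process; the sketch below shows it without actually starting the webui (the launch line is commented out, since the exact invocation depends on your checkout):

```shell
# Spoof the reported GPU ISA as gfx1030 (RDNA2, i.e. version 10.3.0) so
# ROCm libraries that ship no RDNA3 kernels will still attempt to run.
# Segfaults are possible when the real ISA differs, as described above.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# python launch.py   # run from the stable-diffusion-webui directory

echo "$HSA_OVERRIDE_GFX_VERSION"
```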
Adding --skip-torch-cuda-test causes Stable Diffusion to only use the CPU, which is agonizingly slow.
The main line I noticed is "Warning: caught exception 'No HIP GPUs are available', memory monitor disabled." I think this means the error comes from torch being unable to detect this card; rocminfo shows it as being properly detected by the system.
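A quick way to narrow down which side is failing (my own sketch, not part of the webui): a ROCm build of torch reports a HIP version via torch.version.hip and exposes HIP devices through the regular torch.cuda API, so if torch.version.hip is None you're on a CUDA/CPU wheel and no HSA override will help.

```python
def hip_status():
    """Report what the installed torch build (if any) says about HIP GPUs."""
    try:
        import torch
    except ImportError:
        # No torch at all in this environment.
        return {"torch": False, "hip": None, "gpu_visible": False}
    return {
        "torch": True,
        # None on CUDA/CPU wheels; a version string like "5.x" on ROCm builds.
        "hip": getattr(torch.version, "hip", None),
        # ROCm builds surface HIP devices through the torch.cuda API.
        "gpu_visible": torch.cuda.is_available(),
    }

print(hip_status())
```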
Is this caused by RDNA3 cards simply being too new and not yet supported by torch?
Steps to reproduce the problem
Follow the steps to install for AMD GPUs, but with a new RDNA3 card (specifically a 7900 XT).
What should have happened?
GPU being recognized at all / program not segfaulting
Commit where the problem happens
c6f347b81f584b6c0d44af7a209983284dbb52d2
What platforms do you use to access the UI?
Linux
What browsers do you use to access the UI?
Mozilla Firefox
Command Line Arguments
Additional information, context and logs
System previously had a 2060 installed, but I removed it and rebuilt the stable-diffusion-webui folder for the new card. Worked flawlessly with said 2060.