leescorpio opened 4 months ago
Does AMD use xformers? I did actually reduce the CUDA dependency enough to allow this to run on Macs, but I don't have access to an AMD card to test with. It currently disables xformers if you do not have it installed, so based on the above, you must have it installed?
Thank you for your reply
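The "disables xformers if you do not have it installed" behavior described above boils down to an optional-import check. This is a minimal sketch of that pattern, not the actual SUPIR node code; the names `XFORMERS_IS_AVAILABLE` and `attention_backend` are hypothetical.

```python
# Hypothetical sketch of the optional-xformers pattern described above,
# not the actual SUPIR implementation.
try:
    import xformers.ops  # noqa: F401
    XFORMERS_IS_AVAILABLE = True
except ImportError:
    # xformers missing (e.g. on MPS or a ROCm install without it):
    # fall back to plain PyTorch attention.
    XFORMERS_IS_AVAILABLE = False

def attention_backend() -> str:
    """Pick an attention backend based on what is importable."""
    return "xformers" if XFORMERS_IS_AVAILABLE else "pytorch"
```

With this pattern, simply uninstalling xformers is enough to make the code take the PyTorch fallback path.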
Ubuntu 22.04 LTS + ROCm 6.0.2 + RX 6800 (VRAM: 16GB, LLVM: gfx1030) + 32GB DDR4: it errors out or hangs at runtime, with both VRAM and system RAM completely full and not released in time. The problem is the same whether upscaling a small image or sharpening a large one; reducing the scale factor and CFG delays when the problem appears but doesn't fix the root cause.
I'm having the same problem, running Ubuntu with ROCm using my AMD 6900 XT.
I don't know anything about using AMD, but I know someone who has run this on an AMD GPU. Do you have xformers installed? Is it usually used with ROCm? I'd try just uninstalling it if it's not necessary, as it isn't with CUDA or MPS.
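Before uninstalling, it can help to confirm whether xformers is actually present in the ComfyUI environment. A quick, self-contained check (the helper name `xformers_installed` is just for illustration):

```python
# Check whether xformers is importable in the current environment.
# If it is, and you're on ROCm, `pip uninstall xformers` is the fix
# suggested in this thread.
import importlib.util

def xformers_installed() -> bool:
    # find_spec returns None when the package cannot be found.
    return importlib.util.find_spec("xformers") is not None

print("xformers installed:", xformers_installed())
```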
Yup that fixed it!
ROCm cannot use xformers; currently only NVIDIA's CUDA can use it.
Error occurred when executing SUPIR_Upscale:

No operator found for `memory_efficient_attention_forward` with inputs:
    query     : shape=(1, 16384, 1, 512) (torch.bfloat16)
    key       : shape=(1, 16384, 1, 512) (torch.bfloat16)
    value     : shape=(1, 16384, 1, 512) (torch.bfloat16)
    attn_bias :
    p         : 0.0
`decoderF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    xFormers wasn't build with CUDA support
    attn_bias type is
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.bfloat16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 512