lilly1987 opened 1 year ago
The Windows standalone build isn't set up for DirectML, if I recall correctly; when I did the manual install I had to remove all the torch variants and then install just torch-directml to get it picked up correctly. As for whether it'll work on that card, I don't know.
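If it helps, here's the kind of quick sanity check I'd run after that swap to confirm DirectML is actually being picked up. This is just a sketch assuming a pip venv with the stock torch wheels removed and torch-directml installed; `torch_directml.device()` / `device_count()` come from the torch-directml package:

```python
# Quick sanity check that torch-directml is installed and usable.
# Assumes the CUDA/CPU torch wheels were uninstalled first and
# torch-directml was installed in their place.
import torch
import torch_directml

print(torch_directml.device_count())  # number of DirectML adapters found
dml = torch_directml.device()         # default DirectML device

x = torch.ones(2, 2).to(dml)
print(x * 2)                          # should execute on the GPU, not CPU
```

If the import fails or no adapters are listed, the install didn't take.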
The DirectML pipeline isn't very optimized and can't use the lowvram setting anyway:
```python
lowvram_available = False #TODO: need to find a way to get free memory in directml before this can be enabled by default.
```
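For context, that TODO exists because deciding when to shuffle weights in lowvram mode requires knowing how much VRAM is free, and torch-directml doesn't expose that. A simplified sketch of the kind of query involved (not the actual ComfyUI code; `torch.cuda.mem_get_info` is the CUDA/ROCm call with no DirectML counterpart):

```python
import torch

def get_free_vram(device: torch.device):
    """Return free VRAM in bytes, or None if the backend can't report it."""
    if device.type == "cuda":  # covers CUDA and ROCm builds of torch
        free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
        return free_bytes
    return None  # torch-directml has no public free-memory query

# Without a free-memory figure there's no safe threshold for engaging
# lowvram, so it stays hard-disabled on the DirectML backend.
```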
Since you're on a card without much VRAM and DirectML isn't particularly fast, I'd try NOD.ai SHARK for basic image generation / inpainting / outpainting, unless you need something specific from the node-based workflow; it's much faster, and you don't need to do anything weird to get it working on AMD.
SHARK's problem is that it flattens and pre-tunes model + LoRA + VAE combos and downloads base models for everything, so it eats disk space like a fat kid in a candy store if you're not keeping an eye on it. It's also the fastest thing for AMD right now, with compiled ONNX for DirectML lagging about 20% behind (and that has many of the same pre-compilation problems).
That said, Comfy has far more features and customizability, so you might be stuck working with small images and models within the constraints of your card. Most of the UIs aren't planning on doing much about AMD until AMD gets off their butts and ports ROCm to Windows. Integrating llvm-iree into a UI that wasn't built around it is non-trivial, and it's still too much of a moving target to consider as a backend for smaller projects, IMO.
My GPU is an RX 6600 and I want a lowvram option. When I run it, VRAM usage hits 8 GB. [screenshots: the run, then VRAM usage]