psych0v0yager opened this issue 2 weeks ago
Are there plans to support multiple GPUs? Currently, when I run app.py (passing in --device cuda), I get an OOM error, even though my second GPU is completely empty.
I find that most diffusion-based models cannot be split across GPUs during inference; I hope that changes soon.
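For what it's worth, if the backend here is a diffusers-style pipeline (an assumption on my part, I haven't checked this repo's code), recent diffusers releases can spread a pipeline's components across visible GPUs via `device_map="balanced"` (requires accelerate). A rough sketch of what I mean:

```python
# Hedged sketch, assuming a generic diffusers pipeline, not this repo's actual API.
# device_map="balanced" asks accelerate to place the pipeline's components
# (UNet, text encoder, VAE) across the available GPUs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
    device_map="balanced",
)

image = pipe("a photo of an astronaut riding a horse").images[0]
```

Note this distributes whole components rather than sharding a single model, so the largest component still has to fit on one card.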
That's unfortunate. At least the --save-memory option works pretty well on a 3090.
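In case it helps anyone else landing here: I'd guess --save-memory toggles some form of offloading internally (I haven't read the implementation). With a plain diffusers pipeline, the rough equivalent would be model CPU offload, something like:

```python
# Hedged sketch: CPU offload with a generic diffusers pipeline (an assumption,
# not necessarily what --save-memory does in this project). Requires accelerate.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Keep only the submodule currently running on the GPU; the rest stays in CPU RAM.
pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
```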