comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Multiple GPUs #162

Open SuperComboGamer opened 1 year ago

SuperComboGamer commented 1 year ago

Is there any way to use multiple GPUs for the same image, or to use multiple GPUs for large batches to spread out the load?

78Alpha commented 1 year ago

It looks like it uses accelerate, so you could try

`accelerate config`

in the venv or environment and set up multi-GPU from there.
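
For context, `accelerate config` writes a config file that the Accelerate library reads at runtime. A minimal, generic sketch of that flow (not ComfyUI's own integration, and the module below is just a stand-in) looks like this:

```python
# Generic HF Accelerate usage, shown only to illustrate what `accelerate config`
# sets up; ComfyUI's own integration may differ (per the next comment, it is
# only active in --lowvram mode).
import torch
from accelerate import Accelerator

accelerator = Accelerator()          # reads the config written by `accelerate config`
model = torch.nn.Linear(8, 8)        # stand-in module, not a real diffusion model
model = accelerator.prepare(model)   # placed/wrapped according to the configured devices

print(accelerator.device, accelerator.num_processes)
```

For an actual multi-GPU run the script would normally be started with `accelerate launch` rather than plain `python`.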

comfyanonymous commented 1 year ago

Right now accelerate is only enabled in --lowvram mode.

The plan is to add an option to set the GPU comfyui will run on.

This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.
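
As a rough illustration of that last idea: the existing HTTP API already lets an external script spread queued prompts across several running ComfyUI instances. A minimal sketch (the backend addresses and the workflow filename below are placeholders) could look like:

```python
# Minimal sketch: round-robin prompt submission to several ComfyUI backends
# over the existing /prompt HTTP endpoint. Backend URLs and the workflow file
# ("workflow_api.json", exported via "Save (API Format)") are placeholders.
import itertools
import json
from urllib import request

BACKENDS = ["http://127.0.0.1:8188", "http://192.168.1.20:8188"]  # example hosts
backend_cycle = itertools.cycle(BACKENDS)

def queue_prompt(workflow: dict) -> None:
    """Send one API-format workflow to the next backend in the rotation."""
    host = next(backend_cycle)
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    request.urlopen(request.Request(f"{host}/prompt", data=data))

with open("workflow_api.json") as f:
    workflow = json.load(f)

for _ in range(8):   # spread a batch of 8 jobs across the listed backends
    queue_prompt(workflow)
```

Each backend still renders its images independently, so this parallelizes across queued jobs rather than within a single image.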

SuperComboGamer commented 1 year ago

I figured out how to use multiple GPUs for separate images in a different UI, but I want to be able to use two GPUs for one image at a time.

WASasquatch commented 1 year ago

> Right now accelerate is only enabled in --lowvram mode.
>
> The plan is to add an option to set the GPU comfyui will run on.
>
> This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.

Would be amazing for running Comfy on farms, and remoting it in for jobs.

s-marcelle commented 1 year ago

> Right now accelerate is only enabled in --lowvram mode.
>
> The plan is to add an option to set the GPU comfyui will run on.
>
> This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.

Curious as to how far in the future we will have the ability to choose a GPU, because I am trying my best to get it running on my system's built-in GPU. My inner noob crashed while doing so...

I honestly can't wait, and again, THANK YOU FOR THIS GREAT PIECE OF WORK

SuperComboGamer commented 1 year ago

> Right now accelerate is only enabled in --lowvram mode. The plan is to add an option to set the GPU comfyui will run on. This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.
>
> Curious as to how far in the future we will have the ability to choose a GPU, because I am trying my best to get it running on my system's built-in GPU. My inner noob crashed while doing so...
>
> I honestly can't wait, and again, THANK YOU FOR THIS GREAT PIECE OF WORK

If you use Easy Diffusion, it will let you use more than one GPU for different images at a time, but not two GPUs for one image at the same time. I have gone through around 100 UIs for Stable Diffusion already, and I have found that ComfyUI is the fastest one, so you could use Easy Diffusion to create a huge batch at a time and then go to ComfyUI to run a lot of steps on a single image.

WASasquatch commented 1 year ago

Did I add multi-GPU support for Easy Diffusion? I can't even remember anymore.

> Right now accelerate is only enabled in --lowvram mode. The plan is to add an option to set the GPU comfyui will run on. This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.
>
> Curious as to how far in the future we will have the ability to choose a GPU, because I am trying my best to get it running on my system's built-in GPU. My inner noob crashed while doing so... I honestly can't wait, and again, THANK YOU FOR THIS GREAT PIECE OF WORK
>
> If you use Easy Diffusion, it will let you use more than one GPU for different images at a time, but not two GPUs for one image at the same time. I have gone through around 100 UIs for Stable Diffusion already, and I have found that ComfyUI is the fastest one, so you could use Easy Diffusion to create a huge batch at a time and then go to ComfyUI to run a lot of steps on a single image.

unphased commented 1 year ago

Can someone clarify whether it's possible to "send" workflows defined in ComfyUI into Easy Diffusion to leverage the multi-GPU capability?

kxbin commented 11 months ago

I hope someone can provide guidance on how to develop this feature.

dnalbach commented 7 months ago

Being able to use multiple GPUs would really help in the future with Stable Video Diffusion and whatever comes later. SVD uses dramatically more memory.

rrfaria commented 6 months ago

try this: https://github.com/city96/ComfyUI_NetDist

robinjhuang commented 3 months ago

Have you guys tried using Swarm to achieve this? https://github.com/mcmonkeyprojects/SwarmUI

bedovyy commented 3 months ago

HF diffusers can use multiple GPUs in parallel using distrifuser or PipeFusion. https://github.com/mit-han-lab/distrifuser https://github.com/PipeFusion/PipeFusion

I have tested distrifuser, and the results were quite good. (I used `run_sdxl.py --mode benchmark`, which I believe generates one image with 50 steps.)

| 1x 3090 | 2x 3090 (PCIe x8/x8) | 1x 4090 | 4090 + 3090 (PCIe x16/x4) |
| --- | --- | --- | --- |
| 13.88824 s | 7.93942 s | 6.82159 s | 8.04754 s |

Is there a plan to support something like this?
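
For anyone curious what that looks like in code: as best I recall the distrifuser README, a two-GPU SDXL run is launched with `torchrun` and a script roughly like the one below. The class and argument names are recalled from their docs, not verified against the current release, so treat this as a sketch and check it against their repo.

```python
# Rough sketch based on my recollection of the distrifuser README; names and
# arguments should be verified against https://github.com/mit-han-lab/distrifuser.
# Launch with: torchrun --nproc_per_node=2 this_script.py
import torch
from distrifuser.pipelines import DistriSDXLPipeline
from distrifuser.utils import DistriConfig

distri_config = DistriConfig(height=1024, width=1024, warmup_steps=4)
pipeline = DistriSDXLPipeline.from_pretrained(
    distri_config=distri_config,
    pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    use_safetensors=True,
)
image = pipeline(
    prompt="a photo of an astronaut riding a horse",
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
if distri_config.rank == 0:   # only one process saves the output
    image.save("output.png")
```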

yggdrasil75 commented 2 months ago

Now, with Flux being massive, I fear that larger models will become more common. My 3090 can't handle Flux alone; it has to offload into system RAM or onto disk. It would be nice to have the ability to split the workflow off onto my P40 so that the model isn't being loaded and unloaded from the main 3090. The P40 will slow the 3090 down, but not nearly as much as system RAM or swap space does, and it can at least process something while the swap would just sit there waiting.
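
For what it's worth, the underlying idea (keeping secondary models resident on the second card instead of offloading them to system RAM) is straightforward to express in plain PyTorch. The sketch below is only a generic illustration with stand-in modules, not ComfyUI code:

```python
# Generic PyTorch illustration of keeping auxiliary models resident on a
# second GPU (e.g. a P40 at cuda:1) so the primary GPU (e.g. a 3090 at cuda:0)
# never has to swap them in and out of system RAM. The modules are stand-ins.
import torch

primary = torch.device("cuda:0")     # fast card running the diffusion model
secondary = torch.device("cuda:1")   # slower card holding the text encoder etc.

text_encoder = torch.nn.Linear(768, 768).to(secondary)  # stand-in for CLIP/T5
unet = torch.nn.Linear(768, 768).to(primary)            # stand-in for the UNet/DiT

tokens = torch.randn(1, 768, device=secondary)
cond = text_encoder(tokens).to(primary)  # only the small conditioning tensor crosses the bus
latents = unet(cond)                     # heavy sampling stays on the primary GPU
```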

yincangshiwei commented 1 month ago

> HF diffusers can use multiple GPUs in parallel using distrifuser or PipeFusion. https://github.com/mit-han-lab/distrifuser https://github.com/PipeFusion/PipeFusion
>
> I have tested distrifuser, and the results were quite good. (I used `run_sdxl.py --mode benchmark`, which I believe generates one image with 50 steps.)
>
> | 1x 3090 | 2x 3090 (PCIe x8/x8) | 1x 4090 | 4090 + 3090 (PCIe x16/x4) |
> | --- | --- | --- | --- |
> | 13.88824 s | 7.93942 s | 6.82159 s | 8.04754 s |
>
> Is there a plan to support something like this?

These results look great. I also hope this can be integrated into ComfyUI so we can try it.