AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Feature Request]: Ability to Split Batch Generation Across Multi-GPU #16422

Open HvskyAI opened 3 months ago

HvskyAI commented 3 months ago

Is there an existing issue for this?

What would your feature do?

I'm aware that a single diffusion model cannot be split across multiple GPUs. However, if an instance of the model is loaded onto each GPU, batch generation could be greatly sped up by splitting the batch across the available cards.

This would allow:

  1. The same number of images to be generated in less time (each GPU handles batch size ÷ no. of GPUs)
  2. A larger batch to be generated in the same span of time (batch size × no. of GPUs)

A similar workflow is implemented in SwarmUI, with ComfyUI as a backend.

This can also, in theory, be done by loading two discrete instances of Stable Diffusion Web UI, but keeping prompts, sampler settings, and extension settings in sync across instances becomes a workflow issue.

I propose multi-GPU support for Stable Diffusion Web UI: load the selected model onto each card, and sync generation parameters through a single instance of the Web UI.

Proposed workflow

  1. Add launch arguments as necessary to specify multiple CUDA devices.
  2. Under "Generation Parameters" in the Web UI, add an option to increase the number of instances, loading the selected model onto each card.
  3. If Batch Count > 1, generate one batch on each available card, sequencing only the batches that exceed the number of cards.
  4. If Batch Count = 1, split the batch size across the available cards.
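The dispatch rules in steps 3 and 4 could be sketched roughly as follows. This is a minimal illustration only; `split_batch` is a hypothetical helper and not part of the Web UI codebase:

```python
def split_batch(batch_count: int, batch_size: int, num_gpus: int) -> list[list[int]]:
    """Assign batches to GPUs per the proposed rules.

    Returns one entry per GPU; each entry is the list of batch sizes
    that GPU would run in sequence.
    """
    per_gpu = [[] for _ in range(num_gpus)]
    if batch_count > 1:
        # Step 3: one full batch per card, round-robin any excess batches.
        for i in range(batch_count):
            per_gpu[i % num_gpus].append(batch_size)
    else:
        # Step 4: split a single batch's size across the cards.
        base, rem = divmod(batch_size, num_gpus)
        for i in range(num_gpus):
            share = base + (1 if i < rem else 0)
            if share:
                per_gpu[i].append(share)
    return per_gpu
```

For example, `split_batch(4, 2, 2)` gives each of two cards two sequential batches of size 2, while `split_batch(1, 5, 2)` splits a single batch of 5 into sizes 3 and 2.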

Additional information

No response

Q424642444 commented 1 month ago

Agree!