Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What would your feature do ?
I'm aware that a single diffusion model cannot be split across multiple GPUs. However, if an instance of the model is loaded onto each GPU, generation of image batches could be greatly sped up by splitting the batch across the available cards.
This would allow:
The same number of images generated in less time (roughly batch size / no. of GPUs)
A larger batch generated in the same span of time (batch size x no. of GPUs)
A similar workflow is implemented in SwarmUI, with ComfyUI as a backend.
This can also, in theory, be done by running two separate instances of Stable Diffusion Web UI, but keeping prompts, sampler settings, and extension settings in sync between them becomes a workflow problem.
I propose multi-GPU support for Stable Diffusion Web UI: load the selected model onto each card and sync generation parameters through a single instance of the Web UI.
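To illustrate the idea, here is a minimal sketch of per-GPU batch dispatch using the diffusers library rather than the Web UI's internal model loader. The `--device-ids` flag, the `run_on_device` helper, and the model ID are placeholder assumptions for the sketch, not existing Web UI options.

```python
# Hypothetical sketch: one model copy per GPU, sub-batches dispatched in parallel.
# Uses diffusers for brevity; the Web UI would use its own model loading code.
import argparse
from concurrent.futures import ThreadPoolExecutor

import torch
from diffusers import StableDiffusionPipeline

parser = argparse.ArgumentParser()
parser.add_argument("--device-ids", type=int, nargs="+", default=[0],
                    help="CUDA device indices to load the model onto (proposed flag)")
parser.add_argument("--batch-size", type=int, default=8)
args = parser.parse_args()

# Load an independent copy of the model onto each selected card.
pipes = {
    i: StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder model ID
        torch_dtype=torch.float16,
    ).to(f"cuda:{i}")
    for i in args.device_ids
}

def run_on_device(device_id, prompt, n_images):
    # Each thread drives its own pipeline, so the GPUs work concurrently.
    return pipes[device_id](prompt, num_images_per_prompt=n_images).images

prompt = "a photo of an astronaut riding a horse"

# Split the requested batch size as evenly as possible across the cards.
per_card = [args.batch_size // len(args.device_ids)] * len(args.device_ids)
for i in range(args.batch_size % len(args.device_ids)):
    per_card[i] += 1

with ThreadPoolExecutor(max_workers=len(args.device_ids)) as pool:
    futures = [pool.submit(run_on_device, d, prompt, n)
               for d, n in zip(args.device_ids, per_card) if n > 0]
    images = [img for f in futures for img in f.result()]

print(f"Generated {len(images)} images across {len(args.device_ids)} GPU(s)")
```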
Proposed workflow
Add launch arguments as necessary to specify multiple CUDA devices.
Under "Generation Parameters" in the Web UI, add option to increase instances, loading the selected model into each card(s).
If Batch Count > 1, generate one batch per available card in parallel, sequencing only the batches that exceed the number of cards.
If Batch Count = 1, split the batch size across the available cards (see the scheduling sketch after this list).
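A rough sketch of how that scheduling rule could map batches to cards; `plan_jobs` is a hypothetical helper written for this issue, not existing Web UI code:

```python
# Hypothetical scheduling rule for the proposed workflow.
def plan_jobs(batch_count, batch_size, num_gpus):
    """Return a list of (device_index, images_in_this_job) tuples."""
    if batch_count > 1:
        # One full batch per card; batches beyond the card count are assigned
        # round-robin and run sequentially on whichever card they land on.
        return [(i % num_gpus, batch_size) for i in range(batch_count)]
    # Single batch: split the batch size across the cards as evenly as possible.
    base, extra = divmod(batch_size, num_gpus)
    sizes = [base + (1 if i < extra else 0) for i in range(num_gpus)]
    return [(i, n) for i, n in enumerate(sizes) if n > 0]

# Batch Count = 3, Batch Size = 4 on 2 GPUs
#   -> [(0, 4), (1, 4), (0, 4)]  (third batch queues behind the first on GPU 0)
print(plan_jobs(3, 4, 2))
# Batch Count = 1, Batch Size = 5 on 2 GPUs
#   -> [(0, 3), (1, 2)]
print(plan_jobs(1, 5, 2))
```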
Additional information
No response