invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

Use multiple GPUs at same time? #97

Closed Visual-Synthesizer closed 2 years ago

Visual-Synthesizer commented 2 years ago

Really enjoying playing with this repo, thanks! Is there a way to use multiple GPUs on the same system, and/or to select which GPUs to use? I have 3 GPUs and would like to use them all at the same time for multi-GPU inference.

lstein commented 2 years ago

You can select which GPU to run on using the --device option at launch time. However, I do not know how to code multi-GPU inference. Is there an example somewhere that I can review?
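For reference, the launch-time selection mentioned above looks something like this (the `--device` flag comes from this thread; the `scripts/dream.py` entry point is an assumption based on the repo's layout at the time and may differ by version):

```shell
# Run the interactive script on the second GPU (device indices start at 0).
python scripts/dream.py --device cuda:1
```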

@warner-benjamin, I believe you contributed the device selection code. Do you know how to do what this user is asking?

Visual-Synthesizer commented 2 years ago

@lstein Thank you for the prompt response. I found the --device 'cuda:0' option, and was able to launch multiple scripts at the same time on different GPUs using tmux, which is a great workaround. I did a lot of digging last night and it seems possible to do multi-GPU inference with PyTorch DataParallel, or perhaps DeepSpeed, but I did not find a functioning example based on a text-to-image generator. This one is based on a transformer: https://www.deepspeed.ai/tutorials/inference-tutorial/

I will ask around the different forums after work and see if I can come up with any code or further clues.
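The tmux workaround described above might look like this (a sketch only; the script path and flag spelling are assumptions based on this thread):

```shell
# One detached tmux session per GPU; each instance is pinned to one device.
tmux new-session -d -s gpu0 "python scripts/dream.py --device cuda:0"
tmux new-session -d -s gpu1 "python scripts/dream.py --device cuda:1"
tmux new-session -d -s gpu2 "python scripts/dream.py --device cuda:2"
```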

warner-benjamin commented 2 years ago

It might be easier to add a parallel-for loop that splits multiple generation requests (via -n#) across multiple GPUs than trying to make SD work with DeepSpeed. Perhaps in a new module? We would need to pre-load the models onto all GPUs.
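A minimal sketch of that parallel-for idea, with a placeholder `generate()` standing in for the real per-device sampling call (all names here are hypothetical; a real version would hold one pre-loaded model per GPU):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical device list; in practice this would come from torch.cuda.
DEVICES = ["cuda:0", "cuda:1", "cuda:2"]

def generate(prompt: str, device: str) -> str:
    # Placeholder for the actual Stable Diffusion call on `device`.
    return f"{prompt} -> rendered on {device}"

def generate_batch(prompt: str, n: int) -> list[str]:
    # Request i goes to device i % len(DEVICES), round-robin style,
    # with one worker thread available per device.
    with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
        jobs = [
            pool.submit(generate, prompt, DEVICES[i % len(DEVICES)])
            for i in range(n)
        ]
        return [j.result() for j in jobs]
```

With `-n6`, for example, `generate_batch("a castle", 6)` would fan the six requests out so each of the three GPUs renders two images.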

Oceanswave commented 2 years ago

Yeah, or the ol' KISS method: spawn multiple instances of dream, each pointing to a different GPU. Pipe to a virtual file (though I think I saw a bug in that).

Or run a slightly modified version whose modifications subscribe to command messages from your favorite queue implementation (Redis, SQS, ASB, Kafka, etc.). It even works across multiple physical or virtual machines :)

Use something like MinIO to store the images and you have your own dream factory. I saw someone had containerized SD; that might be a worthwhile endeavor to pull in.

I need to get around to publishing my Redis-based mechanism if anyone is interested.
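The queue-based design above can be sketched in-process with the standard library (a real deployment would swap `queue.Queue` for Redis/SQS/Kafka and write images to object storage such as MinIO; everything here is a hypothetical stand-in, with a string result replacing actual image generation):

```python
import queue
import threading

commands: queue.Queue = queue.Queue()   # inbound generation commands
results: queue.Queue = queue.Queue()    # finished work

def worker(device: str) -> None:
    # Each worker is pinned to one GPU and drains the shared command queue.
    while True:
        cmd = commands.get()
        if cmd is None:  # sentinel: shut this worker down
            break
        # Placeholder for running the actual generation on `device`.
        results.put(f"{cmd['prompt']} done on {device}")

threads = [
    threading.Thread(target=worker, args=(f"cuda:{i}",)) for i in range(2)
]
for t in threads:
    t.start()
for prompt in ["castle", "forest", "ocean"]:
    commands.put({"prompt": prompt})
for _ in threads:
    commands.put(None)  # one sentinel per worker
for t in threads:
    t.join()
```

Because workers pull from a shared queue, a faster or idler GPU naturally takes more work, and the same pattern extends across machines once the queue is network-backed.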

reidsanders commented 2 years ago

There's always `CUDA_VISIBLE_DEVICES=K`.
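That environment variable restricts which physical GPUs a process can see, so each instance can be pinned without any code changes (the script path below is an assumption from this thread):

```shell
# Each process sees only the GPU listed, exposed internally as cuda:0.
CUDA_VISIBLE_DEVICES=0 python scripts/dream.py &
CUDA_VISIBLE_DEVICES=1 python scripts/dream.py &
```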

alfi4000 commented 10 months ago

I have a question: if I have 2 GPUs connected to the system, does Invoke use both when I create an image, or just one?

jameswan commented 3 months ago

InvokeAI does not use multiple GPUs at the same time.