Closed Visual-Synthesizer closed 2 years ago
You can select which GPU to run on using the --device option at launch time. However, I do not know how to code multi-GPU inference. Is there an example somewhere that I can review?
@warner-benjamin, I believe you contributed the device selection code. Do you know how to do what this user is asking?
@lstein Thank you for the prompt response. I found the --device 'cuda:0' option, and was able to launch multiple scripts at the same time on different GPUs using tmux, which is a great workaround. I did a lot of digging last night and it seems possible to do multi-GPU inference with PyTorch DataParallel, or perhaps DeepSpeed, but I did not find a functioning example based on a text-to-image generator. This one is based on a transformer: https://www.deepspeed.ai/tutorials/inference-tutorial/
I will ask around the different forums after work and see if I can come up with any code or further clues.
It might be easier to add a parallel-for loop that splits multiple generation requests (made with -n#) across multiple GPUs than trying to make SD work with DeepSpeed. Perhaps in a new module? We would need to pre-load the models onto all GPUs.
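A minimal sketch of that parallel-for idea, assuming one worker per GPU and round-robin assignment of prompts. The generate() function here is a hypothetical placeholder; real code would load the SD pipeline onto each device (typically in separate processes, one per GPU) rather than return a string.

```python
from concurrent.futures import ThreadPoolExecutor

def generate(prompt, device):
    # Hypothetical stand-in: real code would run the pre-loaded SD
    # pipeline on `device` and return an image.
    return f"{device}: rendered '{prompt}'"

def parallel_generate(prompts, devices):
    """Round-robin the prompts over the available devices, one worker per GPU."""
    def run_on(device, chunk):
        return [generate(p, device) for p in chunk]

    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        # prompts[i::len(devices)] gives each device every Nth prompt.
        futures = [
            pool.submit(run_on, dev, prompts[i::len(devices)])
            for i, dev in enumerate(devices)
        ]
        results = []
        for f in futures:
            results.extend(f.result())
    return results
```

For actual CUDA work, separate processes (multiprocessing or separate launches) avoid Python GIL and CUDA-context issues; threads are used here only to keep the sketch self-contained.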
Yeah, or the ol' KISS method: spawn multiple instances of dream, each pointing to a different GPU. Pipe output to a virtual file (though I think I saw a bug related to that).
Or run a slightly modified version that subscribes to command messages from your favorite queue implementation (Redis, SQS, ASB, Kafka, etc.). That even works across multiple physical or virtual machines :)
Use something like MinIO to store the images and you have your own dream factory. I saw that someone had containerized SD; that might be a worthwhile endeavor to pull in.
I need to get around to publishing my Redis-based mechanism if anyone is interested.
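The queue-based design above can be sketched like this. Python's built-in queue stands in for Redis/SQS/Kafka (any broker with a blocking pop behaves the same, including across machines), and the generation call is a hypothetical placeholder for the real SD pipeline plus an upload to object storage such as MinIO.

```python
import queue
import threading

# Stand-in for a network message broker (Redis, SQS, Kafka, ...).
jobs = queue.Queue()
done = queue.Queue()

def gpu_worker(device):
    # One long-lived worker per GPU (or per machine): it would pre-load
    # the model once, then consume prompts until the stop sentinel.
    while True:
        prompt = jobs.get()
        if prompt is None:
            break
        # Hypothetical generation call; real code would run SD on
        # `device` and push the resulting image to object storage.
        done.put(f"{device}: {prompt}")

threads = [threading.Thread(target=gpu_worker, args=(f"cuda:{i}",))
           for i in range(2)]
for t in threads:
    t.start()

for prompt in ["a castle", "a forest", "a nebula"]:
    jobs.put(prompt)
for _ in threads:
    jobs.put(None)  # one stop sentinel per worker
for t in threads:
    t.join()

results = [done.get() for _ in range(3)]
```

Because the workers only share the queue, the same pattern scales out to multiple machines by swapping the in-process queue for a networked one.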
There's always CUDA_VISIBLE_DEVICES=K
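For anyone unfamiliar with that trick: setting CUDA_VISIBLE_DEVICES=K before a process starts hides all GPUs except physical GPU K, which then appears as cuda:0 inside that process. So you can launch one single-GPU instance per card. A sketch of a launcher (the child command here is a placeholder; you would substitute your actual dream/invoke invocation):

```python
import os
import subprocess
import sys

def launch_per_gpu(command, num_gpus):
    """Launch one copy of `command` per GPU, each pinned via CUDA_VISIBLE_DEVICES."""
    procs = []
    for k in range(num_gpus):
        # Each child sees only physical GPU k, exposed to it as cuda:0.
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(k))
        procs.append(subprocess.Popen(command, env=env))
    return procs

# Placeholder child process that just echoes its assigned GPU; replace
# with the real generation script.
procs = launch_per_gpu(
    [sys.executable, "-c",
     "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    num_gpus=2,
)
for p in procs:
    p.wait()
```

Note that CUDA_VISIBLE_DEVICES must be set before the CUDA runtime initializes in the child, which is why it goes into the environment at launch rather than being set inside the running process.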
I have a question: if I have 2 GPUs connected to the system, does Invoke use both when I create an image, or just one?
InvokeAI does not use multiple GPUs at the same time.
Really enjoying playing with this repo, thanks! Is there a way to use multiple GPUs on the same system, and/or to select which GPUs to use? I have 3 GPUs and would like to use them all at the same time for multi-GPU inference.