Stability-AI / generative-models

Generative Models by Stability AI
MIT License

img2vid can we use multi GPUs to speed up inference? #322

Open khayamgondal opened 6 months ago

khayamgondal commented 6 months ago

Inference takes about 30 minutes for img2vid. Wondering if there is a way to leverage multiple GPUs to improve speed? I have 8x 100 GPUs.

Currently running it with the diffusers pipeline:

import torch
from diffusers import DiffusionPipeline

# fp16 weights on a single CUDA device; `image` is the conditioning frame (a PIL image)
pipeline = DiffusionPipeline.from_pretrained(
    "local/path/stable-video-diffusion-img2vid-xt-1-1", torch_dtype=torch.float16
).to("cuda")
frames = pipeline(image).frames[0]
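
A hedged sketch of one common workaround (not confirmed by the maintainers in this thread): a single pipeline call runs on one GPU, but if you have several input images you can raise throughput by running one pipeline per GPU and sharding the inputs with torch.multiprocessing. MODEL_PATH and IMAGE_PATHS below are placeholders, and this does not shorten the 30 minutes for a single image.

import torch
import torch.multiprocessing as mp
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

MODEL_PATH = "local/path/stable-video-diffusion-img2vid-xt-1-1"  # placeholder path
IMAGE_PATHS = ["img0.png", "img1.png", "img2.png", "img3.png"]   # placeholder inputs

def worker(rank: int, world_size: int):
    # One pipeline per GPU; each process handles a disjoint slice of the inputs.
    device = f"cuda:{rank}"
    pipe = DiffusionPipeline.from_pretrained(MODEL_PATH, torch_dtype=torch.float16).to(device)
    for path in IMAGE_PATHS[rank::world_size]:
        frames = pipe(load_image(path)).frames[0]
        # save/export `frames` here, e.g. with diffusers.utils.export_to_video

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)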
KazusaKitakawa commented 6 months ago

same question

shiwl0329 commented 3 months ago

same question

shiwl0329 commented 3 months ago

Using CUDA_VISIBLE_DEVICES=3,4 python xxx still only uses GPU 3.
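
That is expected: CUDA_VISIBLE_DEVICES only controls which physical GPUs the process can see and re-indexes them from zero; it does not spread one pipeline call across them, so .to("cuda") still lands on a single device (physical GPU 3 in that example). A quick check, assuming the variable is exported before launching Python:

import torch

# With CUDA_VISIBLE_DEVICES=3,4 the process sees two devices,
# re-indexed as cuda:0 (physical GPU 3) and cuda:1 (physical GPU 4).
print(torch.cuda.device_count())      # -> 2
print(torch.cuda.get_device_name(0))  # name of physical GPU 3
x = torch.zeros(1, device="cuda")     # bare "cuda" means cuda:0, i.e. physical GPU 3
print(x.device)                       # -> cuda:0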