kongzhecn / OMG

[ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models
https://kongzhecn.github.io/omg-project/

Is it possible to run this on combined VRAM from 2x 24GB GPUs at 48GB? #6

Closed: Bookwald closed this issue 4 months ago

Bookwald commented 8 months ago

I can run LLMs using the combined VRAM of 2 GPUs. Will this be possible with OMG, especially in Automatic1111 or ComfyUI?

kongzhecn commented 6 months ago

That's a good idea. Using 2 GPUs for inference may help alleviate the insufficient-memory issue. You could run the T2I model and the LoRA-loaded model on separate GPUs, then merge their inference results after each time step.
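A rough sketch of that idea (not OMG's actual code): two copies of the denoiser live on different GPUs, one with a concept LoRA loaded; both predict noise for the same latents at every step, and the predictions are blended before the scheduler step. The model ID, LoRA path, prompt and blend weight below are placeholder assumptions, classifier-free guidance is omitted for brevity, and the single global alpha is only a stand-in for however the per-concept results are actually combined.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base T2I model on GPU 0, LoRA-augmented copy on GPU 1 (model id / LoRA path are placeholders).
pipe_a = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda:0")
pipe_b = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda:1")
pipe_b.load_lora_weights("path/to/concept_lora.safetensors")  # hypothetical LoRA file

def embed(pipe, prompt, device):
    # Standard CLIP text conditioning, computed on the GPU that hosts this copy's text encoder.
    ids = pipe.tokenizer(prompt, padding="max_length", truncation=True,
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids.to(device)
    return pipe.text_encoder(ids)[0]

prompt = "a man and a woman sitting in a cafe"
emb_a = embed(pipe_a, prompt, "cuda:0")
emb_b = embed(pipe_b, prompt, "cuda:1")

scheduler = pipe_a.scheduler
scheduler.set_timesteps(30, device="cuda:0")
latents = torch.randn(1, pipe_a.unet.config.in_channels, 64, 64,
                      device="cuda:0", dtype=torch.float16) * scheduler.init_noise_sigma
alpha = 0.5  # blend weight between base and LoRA noise predictions (assumption)

with torch.no_grad():
    for t in scheduler.timesteps:
        inp = scheduler.scale_model_input(latents, t)
        noise_a = pipe_a.unet(inp, t, encoder_hidden_states=emb_a).sample
        noise_b = pipe_b.unet(inp.to("cuda:1"), t.to("cuda:1"),
                              encoder_hidden_states=emb_b).sample
        # Merge the two GPUs' predictions after each time step, then advance the sampler.
        noise = (1 - alpha) * noise_a + alpha * noise_b.to("cuda:0")
        latents = scheduler.step(noise, t, latents).prev_sample

    image = pipe_a.vae.decode(latents / pipe_a.vae.config.scaling_factor).sample
```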

tanghengjian commented 6 months ago

Hi @Bookwald, I have got this working in a 2×24G (L4 GPU) environment: simply put the UNet and the decoder (VAE) on GPU:1 and the other modules on GPU:0.
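For reference, a minimal sketch of that placement using diffusers and PyTorch forward pre-hooks. The SDXL base checkpoint is just an assumption; OMG's pipeline loads additional modules (ControlNet, segmentation models, etc.) that would need to be placed the same way, and whether the stock pipeline runs end-to-end like this depends on the diffusers version.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)

# Heavy modules (denoiser + decoder) on the second GPU, the rest on the first.
pipe.unet.to("cuda:1")
pipe.vae.to("cuda:1")
pipe.text_encoder.to("cuda:0")
pipe.text_encoder_2.to("cuda:0")

def move_inputs_to_module_device(module, args, kwargs):
    # Forward pre-hook: relocate incoming tensors to whichever GPU holds this module,
    # so activations produced on the other GPU are shuttled across automatically.
    dev = next(module.parameters()).device
    args = tuple(a.to(dev) if torch.is_tensor(a) else a for a in args)
    kwargs = {k: v.to(dev) if torch.is_tensor(v) else v for k, v in kwargs.items()}
    return args, kwargs

# Requires PyTorch >= 2.0 for the with_kwargs flag.
for m in (pipe.unet, pipe.vae, pipe.text_encoder, pipe.text_encoder_2):
    m.register_forward_pre_hook(move_inputs_to_module_device, with_kwargs=True)

# Assumes the pipeline's execution device follows the UNet/VAE; older diffusers
# versions may need the intermediate tensors routed manually instead.
image = pipe("a man and a woman sitting in a cafe", num_inference_steps=30).images[0]
```

Newer diffusers releases also expose a pipeline-level device_map option that spreads components across GPUs automatically, which may be simpler than placing modules by hand.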

ykj467422034 commented 3 months ago

> Hi @Bookwald, I have got this working in a 2×24G (L4 GPU) environment: simply put the UNet and the decoder (VAE) on GPU:1 and the other modules on GPU:0.

Hello, I am struggling with how to run inference on a 24GB GPU. Could you show me your code for running this with two 24GB GPUs? I would greatly appreciate it. Thanks!