kongzhecn / OMG

[ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models
https://kongzhecn.github.io/omg-project/

Is it possible to run this on combined VRAM from 2x 24GB GPUs at 48GB? #6

Open Bookwald opened 3 months ago

Bookwald commented 3 months ago

I can run LLMs using the combined VRAM of 2 GPUs. Will this be possible with OMG, especially in Automatic1111 or ComfyUI?

kongzhecn commented 1 month ago

That's a good idea. Using 2 GPUs for inference may help alleviate the memory shortage. You can run the T2I model and the LoRA-augmented model on different GPUs, then merge their predictions after each timestep.
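A minimal sketch of that idea, not OMG's actual code: the base UNet sits on `cuda:0`, the LoRA-augmented UNet on `cuda:1`, both predict noise for the same latents each timestep, and the two predictions are merged before the scheduler step. The `merge_predictions` callable is a hypothetical stand-in for OMG's region-wise blending, and the call signatures assume diffusers-style UNet and scheduler objects.

```python
import torch

@torch.no_grad()
def denoise_two_gpu(base_unet, lora_unet, scheduler, latents,
                    prompt_embeds_base, prompt_embeds_lora, merge_predictions):
    # Place each model on its own GPU.
    base_unet.to("cuda:0")
    lora_unet.to("cuda:1")
    latents = latents.to("cuda:0")

    for t in scheduler.timesteps:
        # Run the base model and the LoRA model in parallel on separate devices.
        noise_base = base_unet(
            latents, t, encoder_hidden_states=prompt_embeds_base.to("cuda:0")
        ).sample
        noise_lora = lora_unet(
            latents.to("cuda:1"), t, encoder_hidden_states=prompt_embeds_lora.to("cuda:1")
        ).sample

        # Merge the two predictions after this timestep (e.g. region-wise, as OMG does),
        # then take a single scheduler step on GPU 0.
        noise_pred = merge_predictions(noise_base, noise_lora.to("cuda:0"))
        latents = scheduler.step(noise_pred, t, latents).prev_sample

    return latents
```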

tanghengjian commented 1 month ago

Hi @Bookwald, I have gotten this working in a 2x24GB (L4 GPU) environment: I placed the UNet and the decoder model on GPU 1 and the other modules on GPU 0.
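A rough sketch of that placement, assuming a diffusers-style SDXL pipeline (the component names `unet`, `vae`, `text_encoder`, `text_encoder_2` follow the diffusers convention and are not taken from OMG's code):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Heavy modules (UNet and VAE decoder) on GPU 1, everything else on GPU 0.
pipe.unet.to("cuda:1")
pipe.vae.to("cuda:1")
pipe.text_encoder.to("cuda:0")
pipe.text_encoder_2.to("cuda:0")

# Note: with a manual split like this, intermediate tensors (prompt embeddings,
# latents) have to be moved between devices by hand inside the sampling loop,
# since the stock pipeline __call__ assumes a single execution device.
```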