alimama-creative / FLUX-Controlnet-Inpainting


The GPU memory usage is too high #3


peki12345 commented 2 months ago

About 60GB? That's scary. Can it be optimized?

8600862 commented 2 months ago

`torch.cuda.OutOfMemoryError: CUDA out of memory.`

microbenh commented 2 months ago

How can I run the pipeline on several GPUs, like 4×4090?

JPlin commented 2 months ago

You can try this to run it on two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0
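
For reference, newer diffusers releases can also do a coarse multi-GPU split automatically. Below is a minimal sketch using the stock FluxPipeline; whether this repo's custom inpainting pipeline accepts `device_map` is an assumption, so treat it as a starting point rather than a drop-in fix:

```python
import torch
from diffusers import FluxPipeline

# Let diffusers spread the pipeline components across all visible GPUs
# instead of splitting the transformer by hand ("balanced" is currently
# the only pipeline-level device_map strategy).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)

image = pipe(
    "a photo of a cat",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```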

c-steve-wang commented 2 months ago

> You can try this to run it on two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0

Could you kindly provide a script showing how this method works with main.py?

microbenh commented 2 months ago

> You can try this to run it on two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0

It does not work. The transformer needs 24GB and the ControlNet 4GB, and they have to be on the same GPU.

Nomination-NRB commented 1 month ago

> > You can try this to run it on two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0
>
> It does not work. The transformer needs 24GB and the ControlNet 4GB, and they have to be on the same GPU.

It worked for me. I used two 3090s and got results, but the inpainting quality was poor, and it followed the prompt poorly when redrawing.
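
For anyone trying to reproduce the two-3090 setup, here is a minimal placement sketch that respects the same-GPU constraint above. The module names (`controlnet_flux`, `pipeline_flux_controlnet_inpaint`) follow this repo's main.py; treat them as assumptions if the layout has changed, and note that intermediate tensors may still need manual `.to()` calls inside the pipeline, as described in the linked guide:

```python
import torch
# Imports as in this repo's main.py (assumed layout).
from controlnet_flux import FluxControlNetModel
from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

controlnet = FluxControlNetModel.from_pretrained(
    "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetInpaintingPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
)

# Text encoders and VAE fit comfortably on GPU 0. The ~24GB transformer
# and the ~4GB ControlNet must share GPU 1, because the ControlNet
# consumes the transformer's hidden states on every denoising step.
pipe.text_encoder.to("cuda:0")
pipe.text_encoder_2.to("cuda:0")
pipe.vae.to("cuda:0")
pipe.transformer.to("cuda:1")
pipe.controlnet.to("cuda:1")
```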

JPlin commented 3 weeks ago

#27 fixed some bugs; it now needs 28GB of VRAM.
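
If 28GB is still too much for a single card, diffusers' standard offload helpers may also apply here, assuming the repo's pipeline inherits from `DiffusionPipeline` (continuing from a `pipe` object as above):

```python
# Moves each sub-model to the GPU only while it runs; slower per image,
# but peak VRAM drops to roughly the largest single component.
pipe.enable_model_cpu_offload()

# More aggressive alternative: offload at the weight level (much slower).
# pipe.enable_sequential_cpu_offload()
```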

xhinker commented 2 days ago

I can successfully run it with torchao in about 20GB of VRAM, and the results are great.
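
xhinker didn't share a script; below is a minimal sketch of one way such a number could be reached, using torchao's int8 weight-only quantization on the transformer. This is an assumption about the approach, not the poster's confirmed setup:

```python
from torchao.quantization import quantize_, int8_weight_only

# Quantize the transformer weights to int8 in place (torchao >= 0.4).
# The transformer dominates memory use, so peak VRAM drops accordingly,
# usually with only a minor quality cost.
quantize_(pipe.transformer, int8_weight_only())
pipe.to("cuda")
```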