Open peki12345 opened 2 months ago
torch.cuda.OutOfMemoryError: CUDA out of memory.
How to run pipeline in several GPUs, like 4*4090
You can try this to split it across two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0
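For reference, the general idea behind that guide is pipeline-style model splitting: put the first stages of the model on one GPU and the rest on another, moving activations at the boundary. A minimal sketch with a toy model (the two-stage model and device names are illustrative stand-ins, not the actual Flux pipeline; it falls back to CPU when two GPUs are unavailable):

```python
# Sketch of splitting a model across two devices. The toy two-stage
# "transformer" below is a hypothetical stand-in for the real Flux split.
import torch
import torch.nn as nn

# Pick two devices; fall back to CPU so the sketch runs anywhere.
if torch.cuda.device_count() >= 2:
    dev0, dev1 = torch.device("cuda:0"), torch.device("cuda:1")
else:
    dev0 = dev1 = torch.device("cpu")

# First half of the model on dev0, second half on dev1.
stage0 = nn.Sequential(nn.Linear(64, 64), nn.GELU()).to(dev0)
stage1 = nn.Sequential(nn.Linear(64, 64), nn.GELU()).to(dev1)

def forward(x: torch.Tensor) -> torch.Tensor:
    # Move activations between devices at the stage boundary.
    h = stage0(x.to(dev0))
    return stage1(h.to(dev1))

out = forward(torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 64])
```

Each GPU then only needs to hold its own stage's weights, which is why two 24 GB cards can fit a model that one cannot.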
Could you kindly provide the script of how this method works with the main.py?
You can try this to split it across two GPUs: https://huggingface2.notion.site/How-to-split-Flux-transformer-and-run-inference-aa1583ad23ce47a78589a79bb9309ab0
It does not work: the transformer needs 24 GB and the ControlNet another 4 GB, and they have to sit on the same GPU.
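One way around that constraint is to keep the transformer and ControlNet together on one GPU and push everything else (text encoders, VAE) to the other. A hedged sketch of such a placement map; the component names follow the Flux pipeline, but the exact API for applying this map is an assumption:

```python
# Hypothetical per-component device placement for a Flux + ControlNet
# pipeline: the two coupled modules share cuda:0, the rest go to cuda:1.
placement = {
    "transformer": "cuda:0",
    "controlnet": "cuda:0",      # must share a device with the transformer
    "text_encoder": "cuda:1",
    "text_encoder_2": "cuda:1",
    "vae": "cuda:1",
}

# The coupled modules end up on the same device.
print(placement["transformer"] == placement["controlnet"])  # True
```

With 24 GB + 4 GB on cuda:0, this only fits on a card with more than 28 GB of free VRAM, which is why it fails on a 24 GB 4090 without quantization or offloading.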
It worked; I used two 3090s and got results, but inpainting quality was poor and the redraw followed the prompt badly.
I can run it successfully with torchao, using about 20 GB of VRAM; the results are great.
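Some back-of-envelope math on why weight-only int8 quantization (what torchao's `quantize_` with `int8_weight_only` does) gets it under 24 GB: the ~12 B parameter count for the Flux transformer is an assumption here, and activations, ControlNet, and other components are ignored:

```python
# Rough VRAM estimate for the transformer weights alone, comparing
# bf16 (2 bytes/param) to weight-only int8 (1 byte/param).
# The 12e9 parameter count is an assumption for illustration.
PARAMS_TRANSFORMER = 12e9
BYTES_BF16, BYTES_INT8 = 2, 1

def gib(n_bytes: float) -> float:
    return n_bytes / 1024**3

bf16_gib = gib(PARAMS_TRANSFORMER * BYTES_BF16)
int8_gib = gib(PARAMS_TRANSFORMER * BYTES_INT8)
print(f"bf16: {bf16_gib:.1f} GiB, int8 weight-only: {int8_gib:.1f} GiB")
# bf16: 22.4 GiB, int8 weight-only: 11.2 GiB
```

Halving the transformer's footprint leaves room on a single 24 GB card for the ControlNet and the rest of the pipeline, consistent with the ~20 GB figure above.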
About 60 GB? That is far too much. Can it be optimized?