universewill opened 4 months ago
Are you using 1B or 3.6B? That will make a difference (though still pretty hefty).
I changed to the 1B model, still CUDA out of memory ...
I used an A100 (40 GB VRAM) to train the ControlNet with batch size 1 and image size 512, and got torch.cuda.OutOfMemoryError. I switched to an H100 with 80 GB VRAM, and it still runs out of CUDA memory.
How much GPU VRAM is needed to train the ControlNet? Is there any way to reduce the GPU VRAM usage?
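Not an answer from this thread, but the usual first knob for activation memory in PyTorch training is gradient checkpointing (recompute activations during backward instead of storing them), often combined with mixed precision and gradient accumulation. Below is a minimal sketch using `torch.utils.checkpoint.checkpoint_sequential`; the `nn.Sequential` conv stack is a hypothetical stand-in for the model trunk, not the actual ControlNet:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Hypothetical stand-in for a deep model trunk (NOT the real ControlNet):
# a stack of conv blocks whose activations would normally all be cached.
model = nn.Sequential(*[
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
    for _ in range(8)
])

x = torch.randn(1, 16, 64, 64, requires_grad=True)

# Gradient checkpointing: split the stack into 4 segments and recompute
# each segment's activations during backward, trading compute for memory.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
loss = out.mean()
loss.backward()

print(x.grad.shape)  # gradients still flow through checkpointed segments
```

With a diffusion-scale model, this is typically paired with `torch.autocast` for mixed precision, a smaller optimizer state (e.g. an 8-bit Adam variant), and gradient accumulation to keep the effective batch size while holding the per-step batch at 1.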