Open jellersby opened 4 months ago
AFAIK you need an NVIDIA card (CUDA) to train SDXL.
> AFAIK you need an NVIDIA card (CUDA) to train SDXL.
I've been able to train SDXL LoRAs without any problem.
Please try the adafactor optimizer, with the optional args `scale_parameter=False`, `relative_step=False`, `warmup_init=False`. It may use less VRAM than Prodigy.
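For reference, here is a sketch of how those optimizer args could be passed on an sd-scripts command line. The script name, learning rate, and other flag values are assumptions for illustration, not taken from the thread; adapt them to your own setup:

```shell
# Hypothetical invocation -- adjust script name and paths to your setup.
accelerate launch sdxl_train_network.py \
  --optimizer_type adafactor \
  --optimizer_args "scale_parameter=False" "relative_step=False" "warmup_init=False" \
  --learning_rate 1e-4   # with relative_step=False, adafactor needs an explicit LR
```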
> AFAIK you need an NVIDIA card (CUDA) to train SDXL.
> I've been able to train SDXL LoRAs without any problem.
Yeah, LoRAs are specifically designed to use significantly less VRAM and compute, thanks to the low-rank backward passes. The person in the initial issue has an AMD card and is attempting a full fine-tune of SDXL, something that is still very finicky to get working even with the optimizations available on NVIDIA cards.
I've had good success training LoRAs, but the dreambooth script (sdxl_train.py) runs out of VRAM on the first step. Any ideas? I feel like 24 GB (7900 XTX) should be more than enough.
Resolution: 1024
Batch size: 1