abdelrahmanabdelghany opened this issue 1 year ago
When running demo.py, I also run out of memory.
Same problem. What's the minimal GPU memory requirement for running inference? I'm on an 11 GB GPU. I also tried hacking imSize to e.g. 384x480 or 480x600 to work around the memory problem, but it doesn't work, as the tensor dimensions in the demo model seem to be bound to 512x640.
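For what it's worth, Stable Diffusion-style models usually require both image dimensions to be divisible by 64 (8x VAE compression followed by three 2x UNet downsamples), which would explain why 384x480 and 480x600 fail while 512x640 works. A minimal sketch of that check — note the divisible-by-64 constraint is an assumption about this demo model, not confirmed from its code:

```python
# Sketch: why 384x480 fails but 512x640 works, assuming the usual
# Stable Diffusion constraint that both image dimensions are divisible
# by 64 (8x VAE compression, then three 2x UNet downsamples).
# This is an assumption about the demo model, not confirmed from its code.

VAE_FACTOR = 8          # image -> latent spatial compression
UNET_DOWNSAMPLES = 3    # each downsample halves the latent resolution

def dims_ok(height: int, width: int) -> bool:
    """Return True if both dimensions survive every downsampling step evenly."""
    factor = VAE_FACTOR * 2 ** UNET_DOWNSAMPLES  # 8 * 8 = 64
    return height % factor == 0 and width % factor == 0

def latent_pixels(height: int, width: int) -> int:
    """Latent spatial size, a rough proxy for activation memory."""
    return (height // VAE_FACTOR) * (width // VAE_FACTOR)

print(dims_ok(512, 640))   # True  -> the default size satisfies the constraint
print(dims_ok(384, 480))   # False -> 480 is not divisible by 64
print(dims_ok(384, 512))   # True  -> a smaller size that keeps the constraint
```

So if the model isn't hard-coded to 512x640, a size like 384x512 may be a better candidate for reducing memory than 384x480.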
Running on Colab Pro with an A100, I think the minimum is 18 GB.
Adding --gradient_checkpointing and --use_8bit_adam brings training down to 15 GB, although I'm not sure how it affects the results:
accelerate launch finetune-unet.py --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" --instance_data_dir=demo/sample/train --output_dir=demo/custom-chkpts --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=1 --learning_rate=1e-5 --num_train_epochs=500 --dropout_rate=0.0 --custom_chkpt=checkpoints/unet_epoch_20.pth --revision "ebb811dd71cdc38a204ecbdd6ac5d580f529fd8c" --gradient_checkpointing --use_8bit_adam
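As a rough back-of-envelope for why --use_8bit_adam helps: standard Adam keeps two fp32 states (momentum and variance) per parameter, while bitsandbytes' 8-bit Adam quantizes each state to one byte. A sketch of the arithmetic — the ~860M parameter count for the SD v1.x UNet is an assumption here, not measured from this repo:

```python
# Rough estimate of optimizer-state memory for fp32 Adam vs. 8-bit Adam.
# Adam stores two states (momentum + variance) per parameter; 8-bit Adam
# (bitsandbytes) quantizes each state to 1 byte instead of 4.
# The ~860M parameter count for the SD v1.x UNet is an assumption.

def adam_state_gb(num_params: int, bytes_per_state: int, num_states: int = 2) -> float:
    """Optimizer state size in GiB."""
    return num_params * num_states * bytes_per_state / 1024**3

UNET_PARAMS = 860_000_000

fp32 = adam_state_gb(UNET_PARAMS, 4)  # fp32 states
int8 = adam_state_gb(UNET_PARAMS, 1)  # 8-bit quantized states
print(f"fp32 Adam: {fp32:.1f} GiB, 8-bit Adam: {int8:.1f} GiB, saved: {fp32 - int8:.1f} GiB")
```

That's several GiB saved from optimizer state alone, which lines up with the drop to 15 GB; gradient checkpointing saves memory separately by recomputing activations in the backward pass at the cost of extra compute.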
Running finetune-unet.py on Colab, but I get a CUDA out-of-memory error. Is it possible to run this on Colab?