cswry / SeeSR

[CVPR2024] SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution
Apache License 2.0

CUDA out of memory (training process) #37

Open Shengqi77 opened 5 months ago

Shengqi77 commented 5 months ago

Hello! I followed the settings below and used an NVIDIA GeForce RTX 3090 (24 GB) to run the training code, but I ran into a CUDA out-of-memory error. Is it because the 24 GB of VRAM on the 3090 is insufficient for training?

Single GPU:

CUDA_VISIBLE_DEVICES="0," accelerate launch train_seesr.py \
    --pretrained_model_name_or_path="preset/models/stable-diffusion-2-base" \
    --output_dir="./experience/seesr" \
    --root_folders 'preset/datasets/train_datasets/training_for_seesr' \
    --ram_ft_path 'preset/models/DAPE.pth' \
    --enable_xformers_memory_efficient_attention \
    --mixed_precision="fp16" \
    --resolution=512 \
    --learning_rate=5e-5 \
    --train_batch_size=1 \
    --gradient_accumulation_steps=2 \
    --null_text_ratio=0.5 \
    --dataloader_num_workers=0 \
    --checkpointing_steps=10000
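A few memory-reduction techniques commonly used with diffusers/accelerate training scripts may help on a 24 GB card. It is not confirmed whether train_seesr.py already exposes them as CLI flags, so the sketch below only shows how they would be wired up inside a training script; the checkpoint path and hyperparameters are placeholders taken from the command above, not verified SeeSR code.

    # Minimal sketch (not the SeeSR training script): common ways to cut VRAM use
    # when fine-tuning a Stable Diffusion UNet with diffusers + accelerate.
    import torch
    from accelerate import Accelerator
    from diffusers import UNet2DConditionModel

    accelerator = Accelerator(mixed_precision="fp16", gradient_accumulation_steps=2)

    unet = UNet2DConditionModel.from_pretrained(
        "preset/models/stable-diffusion-2-base", subfolder="unet"
    )

    # 1) Gradient checkpointing trades extra compute for much lower activation memory.
    unet.enable_gradient_checkpointing()

    # 2) Memory-efficient attention (the CLI flag above already requests this).
    unet.enable_xformers_memory_efficient_attention()

    # 3) 8-bit Adam (bitsandbytes) roughly halves optimizer-state memory vs. fp32 AdamW.
    try:
        import bitsandbytes as bnb
        optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-5)
    except ImportError:
        optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-5)

    unet, optimizer = accelerator.prepare(unet, optimizer)

Gradient checkpointing and the 8-bit optimizer are usually the two changes with the largest VRAM impact when batch size is already 1.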

nathan66666 commented 4 months ago

I've encountered the same issue. Has the problem been resolved?
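For anyone debugging this, one way to confirm whether the 24 GB limit is really being hit (rather than, say, fragmentation or a leak) is to log peak allocation around a single training step. This is a generic PyTorch sketch, not part of the SeeSR code; training_step is a placeholder for one iteration of the actual loop.

    import torch

    def log_vram(tag: str, device: int = 0) -> None:
        # Report current and peak GPU memory allocated by PyTorch tensors,
        # plus the total memory of the card.
        cur_gb = torch.cuda.memory_allocated(device) / 1024**3
        peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3
        total_gb = torch.cuda.get_device_properties(device).total_memory / 1024**3
        print(f"[{tag}] current {cur_gb:.2f} GB / peak {peak_gb:.2f} GB / card {total_gb:.2f} GB")

    torch.cuda.reset_peak_memory_stats()
    # training_step(batch)  # placeholder: run one iteration of the training loop here
    log_vram("after one step")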