Closed diamond0910 closed 2 years ago
Hi, we train our model at 1024 resolution on 32 GB GPUs. As shown in our paper, StyleSwin has a larger model size than StyleGAN2, and generating 1024-resolution images is also computationally costly, so training requires a large amount of CUDA memory. To reduce CUDA memory usage, please try adding "--use_checkpoint" when training; it saves a lot of memory at only a slight cost in training speed.
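For anyone unfamiliar with what "--use_checkpoint" does under the hood: it enables activation checkpointing, which discards intermediate activations in the forward pass and recomputes them during backward, trading extra compute for lower peak memory. A minimal sketch with a toy block (the `block` below is a hypothetical stand-in, not StyleSwin's actual module):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Hypothetical toy block standing in for one transformer stage.
block = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.GELU(),
    torch.nn.Linear(64, 64),
)

x = torch.randn(2, 64, requires_grad=True)

# Regular forward: all intermediate activations stay alive until backward.
y_plain = block(x)

# Checkpointed forward: only the input is stored; the intermediate
# activations are recomputed when .backward() runs.
y_ckpt = checkpoint(block, x, use_reentrant=False)

# Both paths produce the same output and the same gradients.
assert torch.allclose(y_plain, y_ckpt)
y_ckpt.sum().backward()
```

The results are numerically identical; only the memory/compute trade-off changes, which is why the flag costs some training speed.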
I train StyleSwin for FFHQ 1024 resolution. But I got this error:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 3; 23.65 GiB total capacity; 19.49 GiB already allocated; 474.00 MiB free; 22.10 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I'm using 4 RTX GPUs (24 GB each), and no other programs are running on them. The batch size is set to 2. Why is this not enough to train StyleSwin?
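Since the error message itself suggests `max_split_size_mb`, one thing to try alongside "--use_checkpoint" is the allocator hint it points to. A sketch of a launch line (the script name, launcher, and the 128 MB value are assumptions, not tuned or confirmed values; the actual flags come from the StyleSwin repo):

```shell
# Allocator hint from the error message: cap cached block sizes so large
# allocations suffer less fragmentation. 128 is an arbitrary starting point.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Hypothetical 4-GPU launch; --use_checkpoint is the memory saver
# recommended above.
python -m torch.distributed.launch --nproc_per_node=4 train.py \
    --batch 2 --use_checkpoint
```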