{ckp_dir} does not exist, creating...
Init models...
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /home/murugan86/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 548M/548M [01:13<00:00, 7.80MB/s]
Compute mean (R, G, B) from 1800 images
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1800/1800 [00:03<00:00, 496.33it/s]
Mean(B, G, R) of Kimetsu are [-2.76436356 0.35512874 2.40923482]
Dataset: real 6656 style 1800, smooth 1800
Epoch 0/100
0%| | 0/1110 [00:00<?, ?it/s]/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
warnings.warn("Default upsampling behavior when mode={} is changed "
/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/functional.py:3103: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor changed "
0%| | 0/1110 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 247, in
main(args)
File "train.py", line 167, in main
fake_img = G(img)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/anime/AnimeGenDir/animagen-pytorch-mur/pytorch-animeGAN/modeling/anime_gan.py", line 56, in forward
out = self.res_blocks(out)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/anime/AnimeGenDir/animagen-pytorch-mur/pytorch-animeGAN/modeling/conv_blocks.py", line 98, in forward
out = self.ins_norm1(out)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/instancenorm.py", line 55, in forward
return F.instance_norm(
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2078, in instance_norm
return torch.instance_norm(
RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 3.82 GiB total capacity; 2.57 GiB already allocated; 30.75 MiB free; 2.57 GiB reserved in total by PyTorch)
Training triggers a CUDA out-of-memory error. Is there any workaround available?
murugan86@murugan86-IdeaPad-Gaming3-15ARH05D:~/anime/AnimeGenDir/animagen-pytorch-mur/pytorch-animeGAN$ python3 train.py --dataset Kimetsu --batch 6 --init-epochs 4 --checkpoint-dir {ckp_dir} --save-image-dir {save_img_dir} --save-interval 1 --gan-loss lsgan --init-lr 0.0001 --lr-g 0.00002 --lr-d 0.00004 --wadvd 10.0 --wadvg 10.0 --wcon 1.5 --wgra 3.0 --wcol 30.0
==== Train Config ====
dataset         Kimetsu
data_dir        /content/dataset
epochs          100
init_epochs     4
batch_size      6
checkpoint_dir  {ckp_dir}
save_image_dir  {save_img_dir}
gan_loss        lsgan
resume          False
use_sn          False
save_interval   1
debug_samples   0
lr_g            2e-05
lr_d            4e-05
init_lr         0.0001
wadvg           10.0
wadvd           10.0
wcon            1.5
wgra            3.0
wcol            30.0
d_layers        3
d_noise         False
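For context, the failing allocation is only 48 MiB, but the card has 3.82 GiB total with 2.57 GiB already reserved, so the per-step footprint needs to shrink. The two simplest levers I can see are lowering `--batch` below 6, or running the forward/backward pass in mixed precision via `torch.cuda.amp` (available since PyTorch 1.6, which the interpolate warning above suggests is in use). Below is a minimal, self-contained sketch of the AMP pattern; note that `G`, the optimizer, and the dummy batch are placeholders, not the actual objects from train.py:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

# Placeholder generator and optimizer; the real ones live in train.py.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.Adam(G.parameters(), lr=2e-05)
scaler = GradScaler()

for step in range(2):
    # Dummy batch of 2 images instead of 6 to shrink the footprint further.
    img = torch.randn(2, 3, 256, 256, device="cuda")
    optimizer.zero_grad()
    with autocast():                       # forward pass runs in float16
        fake_img = G(img)
        loss = (fake_img - img).abs().mean()
    scaler.scale(loss).backward()          # scaled loss avoids fp16 underflow
    scaler.step(optimizer)                 # unscales grads before stepping
    scaler.update()
```

If mixed precision is not an option, rerunning the same command with a smaller `--batch` value (e.g. 2 instead of 6) is the most direct fix.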