UBCDingXin / improved_CcGAN

Continuous Conditional Generative Adversarial Networks (CcGAN)
https://arxiv.org/abs/2011.07466
MIT License

about --visualize_fake_images #6

Closed mostar39 closed 2 years ago

mostar39 commented 2 years ago

Hello! I am a master's student in Korea studying GANs. First of all, thank you so much for posting such a great paper and code!

I tried training on the RC-49_256x256 data and saved the generated outputs partway through training.

However, the results came out strangely, as in the pictures below. Can you tell me why? Maybe I did something wrong.. T_T

The first picture is 6000.png, the second picture is 10000.png!


What I ran in the terminal:

```shell
python main.py \
    --root_path /home/ylab3/improved_CcGAN/RC-49/RC-49_256x256/CcGAN-improved/ \
    --data_path /home/ylab3/improved_CcGAN/datasets/RC-49/ \
    --eval_ckpt_path output/ \
    --show_real_imgs --visualize_fake_images
```

UBCDingXin commented 2 years ago

Hi @mostar39, thanks for your interest in our work. It seems your command does not set many of the important options. To reproduce our results, please follow the provided script for training CcGAN (SVDL+ILI) on 256x256 RC-49 at https://github.com/UBCDingXin/improved_CcGAN/blob/3f2660c4a466240b7b3896e8e2ce7aaad759862a/RC-49/RC-49_256x256/CcGAN-improved/scripts/run_train.sh#L61

mostar39 commented 2 years ago

Thank you! I'm sorry I didn't check the scripts properly.

UBCDingXin commented 2 years ago

@mostar39 You are welcome.

mostar39 commented 2 years ago

@UBCDingXin

Hello !

I ran the script as you advised, but I hit this error:

```
RuntimeError: CUDA out of memory. Tried to allocate 800.00 MiB (GPU 0; 23.70 GiB total capacity; 21.59 GiB already allocated; 400.56 MiB free; 21.63 GiB reserved in total by PyTorch)
```

The run keeps exceeding the GPU's memory limit. I have been reducing the batch size, but it still runs out of memory.

Any new tips?

```shell
python main.py \
    --root_path /home/ylab3/improved_CcGAN/RC-49/RC-49_256x256/CcGAN-improved/ \
    --data_path /home/ylab3/improved_CcGAN/datasets/RC-49 \
    --eval_ckpt_path ./RC-49/RC-49_256x256/CcGAN-improved/output/eval_models \
    --seed 2020 --num_workers 0 \
    --min_label 0 --max_label 90 --img_size 256 \
    --max_num_img_per_label 25 --max_num_img_per_label_after_replica 0 \
    --GAN CcGAN --GAN_arch SAGAN --niters_gan 30000 --resume_niters_gan 0 \
    --loss_type_gan hinge --save_niters_freq 2500 --visualize_freq 1000 \
    --batch_size_disc 80 --batch_size_gene 80 --num_D_steps 2 \
    --lr_g 1e-4 --lr_d 1e-4 --dim_gan 256 --dim_embed 128 \
    --kernel_sigma -1.0 --threshold_type soft --kappa -2.0 \
    --gan_DiffAugment --gan_DiffAugment_policy color,translation,cutout \
    --visualize_fake_images --comp_FID --samp_batch_size 500 \
    --FID_radius 0 --FID_num_centers -1 --num_eval_labels -1 \
    --nfake_per_label 200 --dump_fake_for_NIQE \
    2>&1 | tee output_CcGAN_30K.txt
```
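(Editor's note: one generic memory-side workaround when a GPU cannot fit the desired batch is gradient accumulation, i.e., summing gradients over several small micro-batches before each optimizer step. This is not code from the CcGAN repo; the model, data, and accumulation factor below are purely illustrative, and note that accumulation is not exactly equivalent to a true large batch for GAN training.)

```python
# Sketch: emulating a larger effective batch via gradient accumulation.
# Illustrative only -- model, data, and sizes are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

accum_steps = 4   # 4 micro-batches of 20 ~ one effective batch of 80
micro_batch = 20

opt.zero_grad()
for step in range(accum_steps):
    x = torch.randn(micro_batch, 8)
    y = torch.randn(micro_batch, 1)
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()   # gradients accumulate in each parameter's .grad
opt.step()            # one parameter update per effective batch
opt.zero_grad()
```

Whether this helps CcGAN specifically depends on how the training loop is structured, so treat it as a general PyTorch pattern rather than a drop-in fix.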

UBCDingXin commented 2 years ago

Hi @mostar39, a batch size of 80 seems too small; I never tested CcGAN with such a small batch size. I suggest you either try a GPU with more memory or experiment with the lower-resolution RC-49 datasets (e.g., 64x64 or 128x128). If you don't have enough GPU memory, I strongly suggest getting familiar with CcGAN training on the lower-resolution RC-49 first instead of the 256x256 RC-49.
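(Editor's note: a sketch of what switching to a lower-resolution setup might look like. The directory name and flag values below are placeholders, not verified against the repo; the real flags should come from the corresponding `scripts/run_train.sh` of the lower-resolution setup.)

```shell
# Illustrative placeholder command, not a verified invocation.
cd /home/ylab3/improved_CcGAN/RC-49/RC-49_64x64/CcGAN-improved
python main.py --img_size 64 ...   # remaining flags per that setup's run_train.sh
```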