Hi!
I'm facing an issue when I have to resume training: even after reaching 80+ epochs, the transformer has no effect on the images with the trained weights. I tried Hayao and other custom datasets. Here are images at 15 epochs (without resume) and at 30, 40, 60, and 80 epochs with the Hayao dataset, and the changes are barely visible, if present at all. I'm using a copy of the Google Colab notebook.
Is there anything I'm doing wrong in the process? Here are the parameters used for training:
```
!python3 train.py --dataset 'Hayao' \
  --batch 6 \
  --debug-samples 0 \
  --init-epochs 10 \
  --epochs 100 \
  --checkpoint-dir {ckp_dir} \
  --save-image-dir {save_img_dir} \
  --save-interval 1 \
  --gan-loss lsgan \
  --init-lr 0.0001 \
  --lr-g 0.00002 \
  --lr-d 0.00004 \
  --wadvd 10.0 \
  --wadvg 10.0 \
  --wcon 1.5 \
  --wgra 3.0 \
  --wcol 70.0 \
  --resume GD \
  --use_sn
```

(`--epochs` is changed at each training resume.)
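One thing that might help narrow this down: you could check whether the saved checkpoints actually change between epochs, since "barely visible changes" could mean the resume is reloading stale weights. Below is a minimal sketch of such a check, assuming the checkpoints are plain PyTorch files containing a state dict (possibly wrapped under a `model_state_dict` key — both the helper name and that key are my assumptions, not the trainer's actual format):

```python
import torch


def max_param_diff(ckpt_a_path: str, ckpt_b_path: str) -> float:
    """Return the largest absolute difference between matching tensors
    in two checkpoints. A value near 0.0 means the weights barely moved
    between the two saved epochs."""
    a = torch.load(ckpt_a_path, map_location="cpu")
    b = torch.load(ckpt_b_path, map_location="cpu")
    # Some trainers wrap the weights under a 'model_state_dict' key
    # (an assumption here); fall back to the raw dict otherwise.
    a = a.get("model_state_dict", a)
    b = b.get("model_state_dict", b)
    return max(
        (a[k].float() - b[k].float()).abs().max().item() for k in a
    )
```

For example, comparing the generator checkpoint saved at epoch 30 against the one at epoch 40 should give a clearly non-zero value if training is actually progressing after a resume.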