I replaced the data in the horse2zebra folder with my own dataset, keeping the folder name unchanged. I changed base_lr in exp_01.json to 0.0002 and max_step to 200, then started training with "python main.py --to_train=1 --log_dir=./output/AGGAN/exp_01 --config_filename=./configs/exp_01.json". The resulting loss curve is shown below. Why is there such a large fluctuation around epoch 30?
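For reference, this is a minimal sketch of the two fields I edited in exp_01.json (only the keys mentioned above are shown; every other field in the config was left at its original value):

```json
{
  "base_lr": 0.0002,
  "max_step": 200
}
```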