Hello!
I'm currently experimenting with domain adaptation. I've run into a case where my CycleGAN reconstructs image A clearly, but the reconstruction of image B comes out blurry. This is the situation at 40 epochs with 200k training images, and the training losses appear to have converged at this point. Any tips on how to address this issue?
I'm currently training on 16x16 patches cropped from 256x256 images.
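For context, here is a minimal sketch of the kind of patch sampling described above (16x16 crops from 256x256 images). This assumes random cropping with NumPy; the actual pipeline and sampling scheme in my setup may differ:

```python
import numpy as np

def sample_patches(image, patch_size=16, n_patches=8, rng=None):
    """Randomly crop square patches from a single H x W x C image.

    Hypothetical sketch of the patch-sampling step (16x16 patches from
    256x256 images); the real training pipeline may sample differently.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# Example: 8 random 16x16 patches from a dummy 256x256 RGB image
img = np.zeros((256, 256, 3), dtype=np.float32)
patches = sample_patches(img)
print(patches.shape)  # (8, 16, 16, 3)
```

One thing I wonder about is whether such small patches give the discriminator enough context for domain B, which could contribute to the blur.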