I'm trying to write my own version of the pix2pix models in PyTorch, following the TensorFlow tutorial "pix2pix: Image-to-image translation".
I wrote the models by hand and followed the tutorial closely to make sure I didn't make any fatal mistakes. When I train them, though, the discriminator's loss drops to 0 after the first epoch and the generator's GAN loss only goes up.
I started tweaking every small thing I thought might be the problem, but the result was still the same. I then copied the U-Net generator and the NLayerDiscriminator from this repo to be sure, but I still got the same results. I also tried switching the loss from BCE to MSE, with no improvement at all.
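For context, the BCE-to-MSE swap I tried is essentially the standard vanilla-GAN vs. LSGAN discriminator criterion; the sketch below uses made-up tensor shapes and variable names (nothing here is copied from my notebook), with the criterion choice as the only difference:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the loss swap: vanilla GAN uses BCE on raw logits,
# LSGAN uses MSE. In both cases the PatchGAN discriminator should output
# raw scores (no final sigmoid) when paired with BCEWithLogitsLoss/MSELoss.
use_lsgan = True
criterion = nn.MSELoss() if use_lsgan else nn.BCEWithLogitsLoss()

# Stand-in PatchGAN output maps (batch of 4, 30x30 patches): one for
# real image pairs, one for generated pairs.
pred_real = torch.randn(4, 1, 30, 30)
pred_fake = torch.randn(4, 1, 30, 30)

# Discriminator loss: real patches should score 1, fake patches 0;
# the 0.5 factor matches the common pix2pix convention of halving D's loss.
loss_real = criterion(pred_real, torch.ones_like(pred_real))
loss_fake = criterion(pred_fake, torch.zeros_like(pred_fake))
loss_d = 0.5 * (loss_real + loss_fake)
```

In both variants the targets are the same; only the criterion changes, which is why I expected the swap alone to make a difference if the loss formulation were the issue.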
I'm fairly sure my code matches this repo 1-to-1 (at a high level, not line by line), so I can't think of anything I might have missed.
Here's a link to the notebook:
PS8.zip
I'd appreciate it if you could point out what I've done wrong; since my code is based on this repo, there shouldn't be any issues in theory.