In networks.py, I saw that the last layer of the generator is tanh and the last layer of the discriminator is sigmoid. However, in the network architecture proposed in the paper, neither the generator nor the discriminator seems to need a final activation layer. Why do we add them?