Right now, the discriminator has no regularization, dropout, or anything else to prevent overfitting. With a small dataset it can simply memorize which instances belong to which batches, so as long as the reconstructions still resemble the original inputs, the discriminator will recognize them. That gives the autoencoder an incentive to make its outputs look very different from the inputs.
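As a rough sketch of what adding dropout could look like (assuming a PyTorch-style model; `latent_dim` and `n_batches` are placeholder values, not the actual ones used in the project):

```python
import torch.nn as nn

# Sketch only: a batch discriminator with dropout layers so it cannot simply
# memorize which latent codes came from which batch. latent_dim / n_batches
# are assumed values for illustration.
def make_discriminator(latent_dim=32, n_batches=2, p_drop=0.5):
    return nn.Sequential(
        nn.Linear(latent_dim, 128),
        nn.ReLU(),
        nn.Dropout(p_drop),          # regularization the current model lacks
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Dropout(p_drop),
        nn.Linear(64, n_batches),    # predicts which batch a sample came from
    )
```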
This is a problem that needs to be fixed. For testing purposes, it would probably be good to use a small sample of artificially batched MNIST data and see whether the current issues can be reproduced at that sample size.
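One possible way to build such a test set (a sketch assuming torchvision is available; the brightness shift is just an example of an artificial, known "batch effect"):

```python
import numpy as np
from torchvision import datasets

# Sketch only: draw a small MNIST sample and assign fake batch labels, with a
# batch-specific intensity shift as the artificial batch effect, so the
# discriminator-overfitting behaviour can be reproduced on a known dataset.
def make_artificial_batches(n_per_batch=200, seed=0):
    rng = np.random.default_rng(seed)
    mnist = datasets.MNIST(root="data", train=True, download=True)
    images = mnist.data.numpy().astype(np.float32) / 255.0
    idx = rng.choice(len(images), size=2 * n_per_batch, replace=False)
    x = images[idx].reshape(-1, 28 * 28)
    batch = np.repeat([0, 1], n_per_batch)
    x[batch == 1] += 0.2          # artificial batch effect: brightness shift
    return np.clip(x, 0.0, 1.0), batch
```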