Open alansberman opened 5 years ago
**EDIT:** To get it to work, I changed the final conv layer of the Generator to `nn.ConvTranspose2d(ngf, nc, kernel_size=1, stride=1, padding=0, bias=False)` and the final conv layer of the Discriminator to `nn.Conv2d(ndf * 8, 1, 2, 2, 0, bias=False)`.

For reference, I added the code that fixed this here.
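For anyone who wants to sanity-check the shapes, here is a minimal sketch (not my exact `main.py`) of the stock DCGAN Generator and Discriminator with only those two final layers swapped in, plus a quick forward pass on 32x32 inputs. I'm assuming the example's usual hyperparameters (`nz=100`, `ngf=ndf=64`, `nc=3`); everything else is left as in the 64x64 version.

```python
# Minimal sketch, assuming nz=100, ngf=ndf=64, nc=3 and the stock DCGAN layout;
# only the two final layers differ from the 64x64 version.
import torch
import torch.nn as nn

nz, ngf, ndf, nc = 100, 64, 64, 3

netG = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # 1x1   -> 4x4
    nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 4x4   -> 8x8
    nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 8x8   -> 16x16
    nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 16x16 -> 32x32
    nn.BatchNorm2d(ngf), nn.ReLU(True),
    # swapped-in final layer: a 1x1 ConvTranspose keeps the 32x32 resolution
    nn.ConvTranspose2d(ngf, nc, kernel_size=1, stride=1, padding=0, bias=False),
    nn.Tanh(),
)

netD = nn.Sequential(
    nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),                    # 32x32 -> 16x16
    nn.LeakyReLU(0.2, True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),                # 16x16 -> 8x8
    nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),            # 8x8   -> 4x4
    nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),            # 4x4   -> 2x2
    nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, True),
    # swapped-in final layer: kernel 2, stride 2 reduces 2x2 -> 1x1
    nn.Conv2d(ndf * 8, 1, 2, 2, 0, bias=False),
    nn.Sigmoid(),
)

z = torch.randn(64, nz, 1, 1)
fake = netG(z)
print(fake.shape)        # torch.Size([64, 3, 32, 32])
print(netD(fake).shape)  # torch.Size([64, 1, 1, 1])
```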
I'm trying to run the DCGAN on ImageNet 32x32, but am running into problems.

If I just change `--imageSize` to 32, the convolutional layers break and I get the error `RuntimeError: sizes must be non-negative`. I changed the kernel size of the final Generator layer to 1 and the kernel size of the final Discriminator layer to 2 (as per @rajaswa in this related issue), but then I get a size mismatch error: `ValueError: Target and input must have the same number of elements. target nelement (64) != input nelement (256)`.

I haven't made any other changes to `main.py`, as I want to establish a baseline model. What other changes to the parameters/Generator/Discriminator do I need?
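For reference, here is the shape arithmetic that I think explains the mismatch (assuming the default batch size of 64): the BCE target holds one label per image, i.e. 64 elements, so the Discriminator has to end in a 1x1 map, while 256 = 64 × 1 × 2 × 2 means its output is still a 2x2 map per image. With the stock strides a 32x32 input reaches the final conv as 2x2, so that last layer is what has to bring 2x2 down to 1x1:

```python
# Rough conv-shape arithmetic; conv_out is a hypothetical helper, not part of main.py.
# Conv2d output size = floor((in + 2*pad - kernel) / stride) + 1
def conv_out(size, k, s, p):
    return (size + 2 * p - k) // s + 1

size = 32
for k, s, p in [(4, 2, 1)] * 4:     # the four stride-2 blocks: 32 -> 16 -> 8 -> 4 -> 2
    size = conv_out(size, k, s, p)
print(size)                          # 2: what the final conv layer sees for 32x32 inputs

batch = 64
print(batch * 1 * size * size)       # 256 -> the "input nelement (256)" in the error
print(batch)                         # 64  -> the "target nelement (64)": one label per image
print(conv_out(2, 2, 2, 0))          # 1   -> a kernel-2, stride-2 final conv maps 2x2 to 1x1
```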