King-Of-Knights opened this issue 7 years ago
I solved it by using some tricks! Thanks anyway!
@King-Of-Knights,
Can you share what the tricks are?
Yes, they come mainly from Soumith Chintala's ganhacks. Since I don't know how to share the modified code, I can email it to you if you want to see it @pribadihcr
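For context, two of the most common ganhacks tricks are one-sided label smoothing and occasional label flipping for the discriminator. I can't say which tricks were actually used here; this is just a minimal numpy sketch (function names are mine) of what those two look like:

```python
import numpy as np

def smooth_real_labels(y, low=0.8, high=1.0, rng=None):
    # one-sided label smoothing: train the discriminator on "real" targets
    # drawn from [low, high] instead of a hard 1.0
    rng = rng or np.random.default_rng(0)
    return y * rng.uniform(low, high, size=y.shape)

def flip_labels(y, p=0.05, rng=None):
    # occasionally flip real/fake labels to keep the discriminator noisy
    rng = rng or np.random.default_rng(0)
    flip = rng.random(y.shape) < p
    return np.where(flip, 1.0 - y, y)
```

Both are applied only to the discriminator's targets; the generator's targets stay hard.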
@King-Of-Knights,
Yes please.
OK, I have sent it to you. I included my trained weights in the file, so you can just start at epoch 958 (though it displays epoch 0, it reloads the epoch-958 training weights) @pribadihcr
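Resuming like this usually just means reloading the weight file before the training loop starts; the displayed epoch resets to 0 unless the epoch counter is stored alongside the weights. A minimal sketch of storing both (the file format here is my own choice, not taken from the shared code):

```python
import numpy as np

def save_checkpoint(path, epoch, weights):
    # store the epoch next to the weight arrays so a restart can resume the count
    np.savez(path, epoch=epoch, **{"w%03d" % i: w for i, w in enumerate(weights)})

def load_checkpoint(path):
    data = np.load(path)
    weights = [data[k] for k in sorted(k for k in data.files if k != "epoch")]
    return int(data["epoch"]), weights
```

With Keras specifically, the equivalent would be `model.save_weights(...)` / `model.load_weights(...)` plus a separately saved epoch number.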
Thanks
see Pull Request
@pribadihcr please see here for the code update; there are some mistakes in that code!
Thanks in advance for your great work! I noticed that the authors of "Conditional Image Synthesis with Auxiliary Classifier GANs" applied their model to CIFAR-10 and ImageNet. I figured I could modify your code with their hyperparameters to reproduce their results. In the generator block, I made some modifications to handle 3-channel images:

```python
def build_generator(latent_size):
    # we will map a pair of (z, L), where z is a latent vector and L is a
    # ...
```

In the discriminator block:

```python
def build_discriminator():
    # build a relatively standard conv net, with LeakyReLUs as suggested in
    # ...
```
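For what it's worth, the main shape changes when moving from MNIST to CIFAR-10 are the spatial plan (28x28 vs 32x32) and the final conv layer going from 1 to 3 output channels. A numpy sketch of a 32x32 upsampling path (the 4x4x384 starting feature map is my assumption, not taken from the actual modification):

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsampling, like Keras UpSampling2D(size=(2, 2))
    return x.repeat(2, axis=0).repeat(2, axis=1)

feat = np.zeros((4, 4, 384))  # assumed starting feature map for a 32x32 output
for _ in range(3):            # three 2x stages: 4 -> 8 -> 16 -> 32
    feat = upsample2x(feat)
# the MNIST example's final 1-channel conv becomes a 3-channel conv for RGB
print(feat.shape)             # prints (32, 32, 384)
```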
I use the default learning rate for both the generator and the discriminator, but after several epochs the losses look like this:

```
component             | loss | generation_loss | auxiliary_loss
generator (train)     | 0.00 | 0.00            | 0.00
generator (test)      | 3.09 | 3.09            | 0.00
discriminator (train) | 0.59 | 0.00            | 0.59
discriminator (test)  | 0.63 | 0.04            | 0.59
```

The generator (test) loss keeps getting bigger (does that mean overfitting?) while the other losses stay stable, and the generated pictures look like trash. 👎 Any advice will be appreciated!! 👍
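A rising test loss on its own doesn't prove overfitting, but tracking it per epoch makes the divergence easy to catch early. A small sketch (the helper name is mine) that flags a run of consecutive increases in the generator's test loss:

```python
def generator_test_diverging(test_losses, patience=5):
    # True once the generator test loss has risen for `patience` epochs in a row
    recent = test_losses[-(patience + 1):]
    return len(recent) == patience + 1 and all(
        later > earlier for earlier, later in zip(recent, recent[1:]))
```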