Closed: waitwaitforget closed this issue 6 years ago
@waitwaitforget It works for me. How do you run it? I had the same problem as in your issue, but I think it relates to the GAN's architecture: we need to analyze the output to pick a good epoch for convergence, as mentioned in the paper (ALOCC). My test converges in about 40 epochs on the MNIST dataset. I also ran it on the Fashion-MNIST dataset, and ALOCC gives wonderful results there. I expect my paper, which makes some contributions to ALOCC, to be accepted soon.
@cod3r0k Well, I just ran it as suggested in the README file, but the loss becomes small and negative. It's a little weird. BTW, congratulations on your work.
@cod3r0k Did you change the hyper-parameters of the model?
I ran the script:
python train.py --dataset mnist --dataset_address ./dataset/mnist --input_height 28 --output_height 28 --attention_label 6 --learning_rate 1e-4 --beta1 0.9
The results look like this:
Dear Sir/Madam,
Thank you for your attention (@waitwaitforget, @cod3r0k, and also some of our email followers, such as Lie, who mentioned this problem when I checked my mail just now). As I mentioned before, I have been busy, so sorry for the late reply. During a cleanup of our implementation code, something was changed inadvertently: I forgot to pass our labels to the labels parameter and instead put them in the logits parameter of sigmoid_cross_entropy_with_logits, and this caused the collapse in your runs. I have changed this and pushed it to our repository, and I hope your problem is solved by this change. If you have any further questions, please do not hesitate to contact me.
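For anyone curious why the swap produced the small negative losses reported above: sigmoid cross-entropy is only guaranteed non-negative when the 0/1 targets go into labels and the raw network outputs into logits. A minimal NumPy sketch (not the repository's code; the formula is the one TensorFlow documents for tf.nn.sigmoid_cross_entropy_with_logits, and the scalar values are made up for illustration):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(labels, logits):
    """Numerically stable sigmoid cross-entropy, following the formula
    documented for tf.nn.sigmoid_cross_entropy_with_logits:
    max(x, 0) - x*z + log(1 + exp(-|x|)), with x = logits, z = labels."""
    logits = np.asarray(logits, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.float64)
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# Correct call: labels are the 0/1 targets, logits are raw network outputs.
correct_loss = sigmoid_cross_entropy_with_logits(labels=1.0, logits=3.0)

# The accidental call from the cleanup step: the two arguments swapped,
# so an unbounded raw output lands in the labels slot.
swapped_loss = sigmoid_cross_entropy_with_logits(labels=3.0, logits=1.0)

print(correct_loss)   # non-negative, as a cross-entropy should be
print(swapped_loss)   # negative, matching the reported collapse
```

With the arguments swapped, the -logits*labels term is no longer bounded by the other two, so the loss can drift below zero and keep shrinking, which matches the "loss is becoming negative and small" symptom in this thread.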
The output of our program with these changes (the real output of the refinement network, i.e., the generator in GAN terms):
Thanks a lot in advance for your consideration and collaboration.
Best regards,
Mohammad
It seems that your code doesn't converge on the MNIST dataset. Please check it out, thanks.