carpedm20 / simulated-unsupervised-tensorflow

TensorFlow implementation of "Learning from Simulated and Unsupervised Images through Adversarial Training"
Apache License 2.0

The result looks different from the paper #28

Open dongdong092 opened 6 years ago

dongdong092 commented 6 years ago

The refined image looks almost the same as the synthetic image, while the result in the paper looks noticeably different. Does it need more training steps?

nickmarton commented 6 years ago

I've come across this as well in my reimplementation. I found that, for some reason which I'm still trying to determine, as soon as a 1x1 convolution is introduced, the activations for that layer and all subsequent layers start to behave very poorly, almost immediately locking into near constant output. I removed the 1x1 convolution layers, flattened the output of relu3, and substituted dense layers for the removed conv layers and was able to reproduce the results.
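For anyone trying to reproduce this substitution: a 1x1 convolution is just a per-pixel linear map over channels (every spatial location shares one weight matrix), whereas flattening and using a dense layer gives each output unit a view of the whole feature map. A minimal NumPy sketch of the two operations (layer sizes are illustrative, not the repo's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature map coming out of "relu3": height x width x channels.
h, w, c = 4, 4, 8
relu3 = rng.standard_normal((h, w, c))

# A 1x1 convolution is a per-pixel linear map over channels:
# every spatial location applies the same (c, k) weight matrix.
k = 2
w_conv = rng.standard_normal((c, k))
conv1x1_out = relu3 @ w_conv            # shape (h, w, k)

# The substitution: flatten relu3 and use a dense layer instead,
# so each output unit sees the whole feature map, not one pixel.
flat = relu3.reshape(-1)                # shape (h * w * c,)
w_dense = rng.standard_normal((flat.size, k))
dense_out = flat @ w_dense              # shape (k,)

print(conv1x1_out.shape, dense_out.shape)
```

Note the trade-off: the dense layer collapses the spatial map into a single prediction, which is what prompts the question below about how the local adversarial loss is then applied.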

lyanne228 commented 6 years ago

@nickmarton Hi, can I clarify -- what exactly did you modify? Was it the refiner or the discriminator conv1x1 layers that you substituted? And by "relu3" did you mean "conv_3"?

daniel-merrick commented 5 years ago

@nickmarton Hi Nick. Did you still apply a local adversarial loss then? If you did, would you mind explaining how you divided the dense layer into multiple sections? If not, did you just apply a global adversarial loss?

Thank you
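For context on the question above: in the SimGAN paper the local adversarial loss comes from the discriminator outputting a w x h probability map (one real/refined classification per local patch) and averaging the cross-entropy over all patches. A minimal NumPy sketch of that loss, with illustrative shapes and a hypothetical function name:

```python
import numpy as np

def local_adversarial_loss(logits, is_real):
    """Cross-entropy averaged over all local patches.

    logits: (h, w, 2) discriminator output map, one 2-way
            real/refined classification per local patch.
    is_real: ground-truth label shared by every patch.
    """
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    label = 1 if is_real else 0
    # Negative log-likelihood of the correct class, per patch.
    nll = -np.log(probs[..., label])
    return nll.mean()

# Uniform logits -> every patch predicts 50/50, so the loss is ln(2).
logits = np.zeros((4, 4, 2))
print(local_adversarial_loss(logits, is_real=True))  # ~0.6931
```

With a flatten-plus-dense discriminator head the spatial map is gone, so one would either reshape the dense output back into patch sections or fall back to a single global loss, which is exactly what the question asks about.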