Open dongdong092 opened 6 years ago
I've come across this as well in my reimplementation. I found that, for a reason I'm still trying to determine, as soon as a 1x1 convolution is introduced, the activations for that layer and all subsequent layers start to behave very poorly, almost immediately locking into near-constant output. I removed the 1x1 convolution layers, flattened the output of relu3, substituted dense layers for the removed conv layers, and was able to reproduce the results.
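Roughly, the substitution looks like the sketch below. This is a minimal illustration assuming tensorflow.keras; the filter counts loosely follow the SimGAN paper's discriminator, and the dense-layer sizes and input shape are placeholders, not the exact values from my code.

```python
# Minimal sketch: SimGAN-style discriminator with the 1x1 conv layers
# replaced by dense layers after flattening relu3. Sizes are illustrative.
from tensorflow.keras import layers, Model

def build_discriminator(input_shape=(35, 55, 1)):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(96, 3, strides=2, padding='same', activation='relu')(inp)  # conv_1
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)    # conv_2
    x = layers.MaxPooling2D(3, strides=1, padding='same')(x)
    x = layers.Conv2D(32, 3, strides=1, padding='same', activation='relu')(x)    # conv_3 / relu3
    # The original discriminator continues with 1x1 convolutions here;
    # these are the layers whose activations collapsed, so they are removed:
    # x = layers.Conv2D(32, 1, activation='relu')(x)
    # x = layers.Conv2D(2, 1)(x)
    # Substitution: flatten relu3 and finish with dense layers instead.
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation='relu')(x)
    out = layers.Dense(2)(x)  # real-vs-refined logits
    return Model(inp, out)
```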
@nickmarton Hi, just to clarify: what exactly did you modify? Was it the refiner's or the discriminator's conv1x1 layers that you substituted? And by "relu3" did you mean "conv_3"?
@nickmarton Hi Nick. Did you still apply a local adversarial loss then? If so, would you mind explaining how you divided the dense layer into multiple sections? If not, did you just apply a global adversarial loss?
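For example, is it something like the sketch below? This is purely hypothetical on my part, assuming tensorflow.keras; the function name, `feature_dim`, and the `grid_h`/`grid_w` section counts are placeholders I made up for illustration.

```python
# Hypothetical sketch of one way a dense head could still give a local
# (per-section) adversarial signal: emit grid_h * grid_w * 2 units and
# reshape them into per-section real/refined logits.
from tensorflow.keras import layers, Model

def build_local_dense_head(feature_dim=4608, grid_h=5, grid_w=7):
    feats = layers.Input(shape=(feature_dim,))   # flattened relu3 output
    x = layers.Dense(512, activation='relu')(feats)
    x = layers.Dense(grid_h * grid_w * 2)(x)
    # One (real, refined) logit pair per image section; the adversarial
    # loss would then be cross-entropy averaged over all sections.
    local_logits = layers.Reshape((grid_h * grid_w, 2))(x)
    return Model(feats, local_logits)
```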
Thank you
The refined image looks the same as the synthetic image, while the result in the paper shows a clearer difference. Does it need more training steps?