Closed: tlatkowski closed this issue 5 years ago.
For the inpainting task, applying BN to the discriminator should be helpful, though it requires tuning the structure and hyperparameters. The benefit of using BN in the generator is less clear. To the best of my knowledge, BN is better suited to discriminative tasks. Generators usually use different normalization methods depending on the specific generation task, e.g., adaptive BN for image synthesis, and instance normalization or AdaIN for style transfer. In fact, we designed a normalization layer for the context prediction task in an upcoming CVPR 2019 paper, and it is helpful for inpainting as well.
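For reference, here is a minimal PyTorch sketch of AdaIN (adaptive instance normalization, Huang & Belongie 2017), one of the generator-side alternatives mentioned above. The function name and tensor shapes are illustrative and not taken from this repo:

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (a sketch, not this repo's layer).

    content, style: float tensors of shape (N, C, H, W).
    Each content feature map is normalized to zero mean / unit std
    per sample and per channel, then rescaled with the style statistics.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```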
I tried applying BN to both the generator and the discriminator, but the improvement was not obvious (perhaps due to my limited experiments), and it reduced computational efficiency.
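This is not the repo's actual code, but a hypothetical sketch of how one might toggle BN inside a discriminator conv block when running such an experiment (the layer hyperparameters are made up):

```python
import torch.nn as nn

def disc_block(in_ch, out_ch, use_bn=True):
    # Hypothetical conv block for a patch-style discriminator.
    # `use_bn` toggles the batch normalization variant discussed above.
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=2, padding=2)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)
```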
Many thanks for your comprehensive reply!
Hi @shepnerd, what do you think of applying additional batch normalization layers? Have you tried using batch normalization to stabilize learning in the generator and discriminators?