Closed kangyeolk closed 3 years ago
Thank you, Kangyeol!
This is to stay a little safer from vanishing gradients. Consider a pixel whose ground truth value is 1.0. Without the range shift, the network is trained to emit a large positive `output` so that `tanh(output) ≈ 1.0`. The danger is that `output` can become so large that the gradient of tanh at `output` is nearly zero, hampering training. After shifting the range, gradients at extreme ground truth values stay adequate while the output values are still bounded.
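To make the effect concrete, here is a minimal numeric sketch of the idea (the scale factor 1.1 is a hypothetical choice for illustration, not necessarily the one used in the repo): with plain tanh, matching a ground truth of 1.0 pushes the pre-activation toward infinity where the gradient vanishes, while a slightly widened range reaches 1.0 at a modest pre-activation with a healthy gradient.

```python
import math

def preactivation_for(target, scale):
    # Pre-activation x such that scale * tanh(x) == target.
    return math.atanh(target / scale)

def grad_at(x, scale):
    # d/dx [scale * tanh(x)] = scale * (1 - tanh(x)^2)
    return scale * (1.0 - math.tanh(x) ** 2)

# Plain tanh: 1.0 is unreachable, so get within 1e-3 of it.
x_plain = preactivation_for(0.999, 1.0)
g_plain = grad_at(x_plain, 1.0)

# Widened range (hypothetical scale 1.1): 1.0 is hit exactly at a modest x.
x_scaled = preactivation_for(1.0, 1.1)
g_scaled = grad_at(x_scaled, 1.1)

print(f"plain tanh:  x = {x_plain:.2f}, gradient = {g_plain:.4f}")
print(f"scaled tanh: x = {x_scaled:.2f}, gradient = {g_scaled:.4f}")
```

Running this shows the scaled variant keeping a gradient roughly two orders of magnitude larger at the extreme target, which is exactly the "adequate gradients at extreme ground truth values" point above.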
I've been using this trick since my very first experiments. If you remove it, I expect everything will still work, since many people successfully train generative nets with plain tanh, but I haven't checked.
Moreover, I think you could even use a plain linear activation (no sigmoid, tanh, or anything) instead of this trick, and everything would still work.
It worked in Zakharov et al. So we borrowed it as-is from there to save on experiments. Today we realize that your concern is indeed quite reasonable, and that BNs or INs are likely to improve the system.
Thank you for your kind answer!
First, thank you for your awesome work! It is very helpful to me. I have two questions regarding the architectures.
https://github.com/shrubb/latent-pose-reenactment/blob/59629a64105c7c33fa01c461a3c65d3690f8533c/generators/vector_pose_unsupervised_segmentation_noBottleneck.py#L172-L174
Again, thanks!