Closed: haoliangjiang closed this issue 4 years ago
The Lua implementation (this repo) includes a bias in its conv layers. See this line. The PyTorch version removes it when batch norm is used. See this line.
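For illustration, here is a minimal sketch of that pattern in PyTorch. This is not the repo's actual code; `conv_block` and its arguments are hypothetical, and the channel/kernel values are placeholders:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, norm_layer=nn.BatchNorm2d):
    # BatchNorm subtracts the per-channel mean right after the conv,
    # so a conv bias would be cancelled out; disable it in that case.
    use_bias = norm_layer != nn.BatchNorm2d
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=use_bias),
        norm_layer(out_ch),
        nn.ReLU(True),
    )
```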
Thank you for the clarification @junyanz. In the PyTorch implementation, is there a specific reason why the bias is removed, or is it just a network design choice based on performance?
Batch norm normalizes the conv layer's output using its mean and variance, so the conv bias is cancelled out by the mean subtraction and becomes redundant. Batch norm also has its own learnable bias term.
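A quick numerical check of this, assuming default training-mode BatchNorm (batch statistics are used, so any constant offset added by the conv bias is removed by the mean subtraction):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 3, 16, 16)

conv = nn.Conv2d(3, 4, kernel_size=3, padding=1, bias=True)
bn = nn.BatchNorm2d(4)

y1 = bn(conv(x))

# Shift the conv bias by a constant; BatchNorm subtracts the
# per-channel batch mean, so the offset cancels exactly.
with torch.no_grad():
    conv.bias += 5.0
y2 = bn(conv(x))

print(torch.allclose(y1, y2, atol=1e-5))  # True
```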
Okay, that makes sense. I'd better review the batch norm paper. Thank you.
Thank you for sharing the PyTorch code for pix2pix and CycleGAN!
I have one question regarding batch norm in pix2pix. If I understand the code correctly, when batch norm is used in pix2pix, all conv layers except the last one are initialized with no bias. Does anyone know why this is the case?