chaiyujin / glow-pytorch

PyTorch implementation of the OpenAI paper "Glow: Generative Flow with Invertible 1×1 Convolutions"
MIT License
507 stars 80 forks

Inquiry about actnorm layer implementation #27

Open TaeilJin opened 3 years ago

TaeilJin commented 3 years ago

Hi

Thank you for publishing your code!

I have been using the Glow structure in my research, and it has really helped me.

I have a question about actnorm layer.

On this line, bias is computed as the mean of input X multiplied by -1.0, and it is then used to calculate vars and logs.

https://github.com/chaiyujin/glow-pytorch/blob/487a6b149295f4ec4b36e408f63604c593ff2031/glow/modules.py#L37

To my knowledge, we should multiply by -1.0 again to recover the mean vector, but the code copies the bias data into the learnable parameter without that extra multiplication by -1.0.
https://github.com/chaiyujin/glow-pytorch/blob/487a6b149295f4ec4b36e408f63604c593ff2031/glow/modules.py#L40

I would like to ask whether I am correct or not. Thank you once again, and I look forward to your response!

chaiyujin commented 3 years ago

@TaeilJin The ActNorm layer is supposed to bias and scale the inputs to N(0, 1) after initialization. To achieve that, the inputs should subtract their mean and then be scaled by the std. Since the forward pass adds the bias rather than subtracting it, the stored bias must be the negative mean, so no extra multiplication by -1.0 is needed.
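To illustrate the point, here is a minimal sketch of ActNorm's data-dependent initialization (my own paraphrase for clarity, not the exact code from `glow/modules.py`; the class and method names are illustrative). Because the forward pass computes `(x + bias) * exp(logs)`, storing `bias = -mean` directly is what centers the inputs at zero:

```python
import torch
import torch.nn as nn

class ActNormSketch(nn.Module):
    """Illustrative ActNorm with data-dependent initialization."""

    def __init__(self, num_channels):
        super().__init__()
        # bias stores the NEGATIVE per-channel mean, so forward is x + bias
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.logs = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.initialized = False

    @torch.no_grad()
    def _init_params(self, x):
        # per-channel statistics over the first batch
        mean = x.mean(dim=[0, 2, 3], keepdim=True)
        var = ((x - mean) ** 2).mean(dim=[0, 2, 3], keepdim=True)
        # logs = log(1 / std), so exp(logs) divides by the std
        logs = torch.log(1.0 / (var.sqrt() + 1e-6))
        # copy -mean into the parameter as-is: no second * -1.0,
        # because the forward pass ADDS the bias instead of subtracting it
        self.bias.copy_(-mean)
        self.logs.copy_(logs)
        self.initialized = True

    def forward(self, x):
        if not self.initialized:
            self._init_params(x)
        # (x + (-mean)) * (1/std): roughly N(0, 1) on the init batch
        return (x + self.bias) * torch.exp(self.logs)
```

On the initialization batch, the output has approximately zero mean and unit variance per channel, which is exactly the behavior the reply describes; multiplying the bias by -1.0 again before copying would break this.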