openai / glow

Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions"
https://arxiv.org/abs/1807.03039
MIT License

Different implementation of activation normalization from the one described in the paper. #114

Open shuohantao opened 3 months ago

shuohantao commented 3 months ago

In the paper, actnorm is guaranteed to be invertible as long as every element of the learned scale vector s is non-zero. The code implementation doesn't guarantee invertibility. I wonder why the actual implementation differs from the one described in the paper.
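To illustrate the distinction being asked about, here is a minimal NumPy sketch (not the repo's actual TensorFlow code) contrasting the paper's direct scale parameterization, which is invertible only when every element of s is non-zero, with an exponential parameterization of the scale, a common alternative in flow implementations whose scale exp(logs) is strictly positive and therefore always invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # (batch, channels)
b = rng.normal(size=8)       # per-channel bias

# Paper-style actnorm: y = s * (x + b), one scale per channel.
# Inversion divides by s, so it fails if any element of s is zero.
s = rng.normal(size=8)
y = s * (x + b)
x_rec = y / s - b
print(np.allclose(x_rec, x))  # round-trip succeeds while s has no zeros

# Exponential parameterization: y = exp(logs) * (x + b).
# exp(logs) > 0 for any real logs, so inversion never divides by zero.
logs = rng.normal(size=8)
y2 = np.exp(logs) * (x + b)
x_rec2 = y2 * np.exp(-logs) - b
print(np.allclose(x_rec2, x))
```

Under the exponential parameterization the optimizer can drive the scale arbitrarily close to zero but can never make it exactly zero, which sidesteps the invertibility condition stated in the paper.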