Justin-Tan / high-fidelity-generative-compression

PyTorch implementation of High-Fidelity Generative Image Compression + routines for neural image compression
Apache License 2.0

Why is the abs layer in HyperpriorAnalysis commented out? #15

ZhangYuef closed this issue 3 years ago

ZhangYuef commented 3 years ago

https://github.com/Justin-Tan/high-fidelity-generative-compression/blob/2bd710c433fa73602afba9842561fe0a2840c7fe/src/network/hyper.py#L58

Since the original paper includes this abs layer in the HyperpriorAnalysis module, I am wondering whether there is a particular reason for commenting this line out.

Thanks in advance for the explanation.

Justin-Tan commented 3 years ago

Hi,

IMO that part of the paper (Fig. 4) is a bit strange, because the quantized latents $y$ can assume negative values as well, given that they're shifted to have zero mean and in general have non-negligible variance.

I checked the tf compression repo and they don't put the abs there either, but let me know if you think of a good reason why it should be there.

Cheers
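
For reference, here is a minimal PyTorch sketch of a hyperprior analysis transform with the abs step made optional, so the two variants under discussion sit side by side. This is not the repo's exact hyper.py; the class name, channel counts (C, N) and layer configuration are assumptions for illustration only:

  import torch
  import torch.nn as nn

  class HyperpriorAnalysisSketch(nn.Module):
      """Toy hyperprior analysis transform. `apply_abs` toggles the step
      debated in this issue: |y| as in the paper vs. signed y as in this repo."""
      def __init__(self, C=220, N=320, apply_abs=False):
          super().__init__()
          self.apply_abs = apply_abs
          self.conv1 = nn.Conv2d(C, N, kernel_size=3, stride=1, padding=1)
          self.conv2 = nn.Conv2d(N, N, kernel_size=5, stride=2, padding=2)
          self.conv3 = nn.Conv2d(N, N, kernel_size=5, stride=2, padding=2)
          self.act = nn.ReLU()

      def forward(self, y):
          if self.apply_abs:
              y = torch.abs(y)  # fold the sign: the hyperprior sees magnitudes only
          x = self.act(self.conv1(y))
          x = self.act(self.conv2(x))
          return self.conv3(x)  # hyper-latents z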

ZhangYuef commented 3 years ago

Thanks for the reply! Your answer makes sense to me. I don't know why they put this abs layer in the paper either.

ZhangYuef commented 3 years ago

Hi @Justin-Tan, after checking the tf compression repo's code, I found that they do apply an absolute-value operation to y before feeding it into hyper_analysis_transform:

  # Build autoencoder and hyperprior.
  y = analysis_transform(x)
  z = hyper_analysis_transform(abs(y))

Check the code in the tensorflow/compression repo.
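
For completeness, a PyTorch mirror of that TF snippet would look roughly like the following. This is a hypothetical helper, not code from either repo; hyper_analysis_transform stands in for whatever module computes the hyper-latents, and use_abs toggles the paper's behaviour:

  import torch

  def hyper_latents(y: torch.Tensor, hyper_analysis_transform, use_abs: bool = True) -> torch.Tensor:
      # Optionally take |y| before the hyperprior analysis transform
      # (paper/TF behaviour), or pass the signed latents through
      # unchanged (this repo's current behaviour).
      return hyper_analysis_transform(torch.abs(y) if use_abs else y)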