Justin-Tan / high-fidelity-generative-compression

Pytorch implementation of High-Fidelity Generative Image Compression + Routines for neural image compression
Apache License 2.0
411 stars 77 forks

Why is normalizing to (-1, 1) not recommended in the provided training mode, while two of the pretrained models for fine-tuning require (-1, 1) normalization? #26

Closed ggxxding closed 3 years ago

ggxxding commented 3 years ago

Why is normalizing to (-1, 1) not recommended in the provided training mode, while two of the pretrained models for fine-tuning require (-1, 1) normalization?

Justin-Tan commented 3 years ago

Did I mention somewhere that it shouldn't be normalized to (-1, 1)? Personally, I think it makes more sense to normalize to a zero-mean range; that's a fairly standard preprocessing step for GANs, and it seems to eliminate the need for the network to learn the appropriate bias vector to shift the mean.

I may have gotten slightly better empirical results with (-1, 1) and so recommended that, but IIRC it doesn't make much difference. You may want to try both modes if you have the compute budget, though.
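For reference, the zero-mean preprocessing described above can be sketched as a pair of affine maps in PyTorch. This is a minimal illustration, not the repository's actual data pipeline; the function names `normalize_to_pm1` and `denormalize_from_pm1` are hypothetical.

```python
import torch


def normalize_to_pm1(x: torch.Tensor) -> torch.Tensor:
    """Map images from [0, 1] to the zero-mean range [-1, 1]."""
    return 2.0 * x - 1.0


def denormalize_from_pm1(x: torch.Tensor) -> torch.Tensor:
    """Invert the mapping, back to [0, 1] for display or saving."""
    return (x + 1.0) / 2.0


# Dummy batch of images already scaled to [0, 1].
batch = torch.rand(4, 3, 256, 256)
z = normalize_to_pm1(batch)
assert z.min() >= -1.0 and z.max() <= 1.0
```

When fine-tuning a pretrained model, the normalization at inference must match whatever range the model was trained with, so the inverse map is needed before measuring perceptual quality or writing images to disk.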

ggxxding commented 3 years ago

OK, thanks!