google-deepmind / sonnet

TensorFlow-based neural network library
https://sonnet.dev/
Apache License 2.0

(Maybe) inconsistency between the VQ-VAE paper and its implementation #252

Open Apollo1840 opened 2 years ago

Apollo1840 commented 2 years ago

First of all, maybe this is just my misunderstanding of the paper, so I hope somebody can explain it to me. Thanks!


In the paper, the loss is defined as

$$L = \log p(x \mid z_q(x)) + \lVert \mathrm{sg}[z_e(x)] - e \rVert_2^2 + \beta \lVert z_e(x) - \mathrm{sg}[e] \rVert_2^2$$

where sg denotes the stop-gradient operator and e is the codebook defined at the beginning of the section: a latent embedding space $e \in \mathbb{R}^{K \times D}$, with $K$ the size of the discrete latent space and $D$ the dimensionality of each embedding vector $e_i$.

So, in the paper, both the codebook loss and the commitment loss are MSEs between z_e(x) and e.

However, in the implementation they are computed as MSEs between z_e(x) (`inputs`) and z_q(x) (`quantized`), where the variable `quantized` is the quantized encoding of the image, i.e. z_q(x).
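For context, the loss computation in Sonnet's `VectorQuantizer` boils down to roughly the following (a paraphrased sketch, not a verbatim copy; the names `inputs` and `quantized` match the ones used above):

```python
import tensorflow as tf

def vq_losses(inputs, quantized, commitment_cost=0.25):
  # `inputs` is z_e(x), the encoder output; `quantized` is z_q(x), the
  # nearest codebook entries. 0.25 is the beta used in the paper's experiments.
  # Codebook loss: moves the embeddings towards the (frozen) encoder output.
  q_latent_loss = tf.reduce_mean((quantized - tf.stop_gradient(inputs)) ** 2)
  # Commitment loss: moves the encoder output towards the (frozen) embeddings.
  e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2)
  return q_latent_loss + commitment_cost * e_latent_loss
```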

Are they actually the same thing, and why?


Apollo1840 commented 2 years ago

Probably, e in the loss formula in the paper actually stands for z_q(x). The authors did not write it as z_q(x) because its calculation involves an argmin, which is non-differentiable. Since z_q(x) is by definition the selected codebook vector e_k, the two formulations compute the same value. And it is not a problem to implement it naively as z_q(x), because TensorFlow, as well as PyTorch, stops the gradient at the argmin operation (integer-valued indices carry no gradient), so it works as intended and causes no bug.

That is my new understanding.
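To sanity-check this, here is a toy snippet (my own sketch, not from the repo; all names are stand-ins) showing that with `stop_gradient` the commitment term only produces gradients for the encoder output and the codebook term only for the embeddings, while the argmin path carries no gradient at all:

```python
import tensorflow as tf

z_e = tf.Variable(tf.random.normal([4, 8]))        # stand-in for encoder output z_e(x)
codebook = tf.Variable(tf.random.normal([16, 8]))  # stand-in for the embeddings e

with tf.GradientTape(persistent=True) as tape:
  # Nearest-neighbour lookup; argmin returns integer indices, so no gradient
  # flows back through the distance computation.
  dists = tf.reduce_sum((z_e[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
  z_q = tf.gather(codebook, tf.argmin(dists, axis=1))  # z_q(x): rows of e

  commitment_loss = tf.reduce_mean((z_e - tf.stop_gradient(z_q)) ** 2)
  codebook_loss = tf.reduce_mean((tf.stop_gradient(z_e) - z_q) ** 2)

# Each term trains exactly one side, just as the paper's two terms intend.
assert tape.gradient(commitment_loss, codebook) is None
assert tape.gradient(codebook_loss, z_e) is None
assert tape.gradient(commitment_loss, z_e) is not None
assert tape.gradient(codebook_loss, codebook) is not None
```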

Please close this issue if the maintainers think this explanation is right.