I am running inference with the trained VQ embedding matrix. The example ipynb only gives training instructions, and I wonder whether the edits I made to the source code are valid.
In vqvae.py, the quantized vector is defined as below,
LINE 98
quantized = self.quantize(encoding_indices)
It then gets updated through the following lines,
LINE 100 to LINE 104
e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2)
q_latent_loss = tf.reduce_mean((quantized - tf.stop_gradient(inputs)) ** 2)
loss = q_latent_loss + self._commitment_cost * e_latent_loss
quantized = inputs + tf.stop_gradient(quantized - inputs)
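For context on that last line, here is a minimal NumPy sketch of its forward pass (toy values are hypothetical; `tf.stop_gradient` is the identity in the forward pass, so it is modeled as a no-op):

```python
import numpy as np

inputs = np.array([0.1, 0.9, 0.4])     # encoder output (toy values)
quantized = np.array([0.0, 1.0, 0.5])  # nearest codebook vectors (toy values)

# quantized = inputs + stop_gradient(quantized - inputs)
# stop_gradient only affects backprop, so forward it is just addition.
straight_through = inputs + (quantized - inputs)

# Numerically, the forward value equals the original quantized vector;
# the rewrite only changes how gradients flow back to the encoder.
print(np.allclose(straight_through, quantized))
```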
It seems that for inference, a non-updated version of this vector must be fed into the decoder instead.
So I made the following edits to the code, to also return the non-updated quantized vector.
LINE 98
quantized = self.quantize(encoding_indices)
quantized_frozen = quantized
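To illustrate what I expect `self.quantize(encoding_indices)` to produce at inference time, here is a hedged NumPy sketch of a plain codebook lookup (the `(embedding_dim, num_embeddings)` layout and the toy values are assumptions, not taken from vqvae.py):

```python
import numpy as np

def quantize(embeddings, encoding_indices):
    """Look up codebook vectors by index.

    embeddings: (embedding_dim, num_embeddings) matrix; each column is a
    code vector, so we transpose before indexing by row.
    """
    return embeddings.T[encoding_indices]

embeddings = np.array([[0.0, 1.0],
                       [0.5, -0.5]])  # 2-dim codes, 2 codebook entries (toy)
indices = np.array([1, 0, 1])         # one index per encoder output

codes = quantize(embeddings, indices)
print(codes.shape)  # one embedding_dim-sized vector per index
```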
Thank you!