MishaLaskin / vqvae

A PyTorch implementation of the Vector Quantized Variational Autoencoder (https://arxiv.org/abs/1711.00937)

Why is your way of getting quantized vectors so tortuous? #2

Open voidism opened 5 years ago

voidism commented 5 years ago

Thanks for your awesome VQ-VAE implementation!

I read the code here, which is what retrieves the quantized vectors:

https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py#L55-L60
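For reference, the linked lines do roughly the following (paraphrased from the file; self, device, d, and z come from the module's forward):

min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1)
min_encodings = torch.zeros(min_encoding_indices.shape[0], self.n_e).to(device)
min_encodings.scatter_(1, min_encoding_indices, 1)  # one-hot selection matrix
z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)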

I am wondering why you don't just use the forward function of the nn.Embedding object self.embedding, like this:

min_encoding_indices = torch.argmin(d, dim=1)
# look up the codebook rows directly, then reshape to match z
z_q = self.embedding(min_encoding_indices).view(z.shape)

(Both ways give the same result.)

If I change the code like this, will I run into any trouble (e.g., gradients failing to propagate)?
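For what it's worth, here is a quick standalone check I put together (all names and sizes below are made up for illustration, not taken from the repo). It suggests the two routes return the same values, and that gradients reach the codebook either way:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sizes for a standalone check (not from the repo).
n_e, e_dim = 8, 4                         # codebook size, embedding dim
embedding = nn.Embedding(n_e, e_dim)
z = torch.randn(2, 5, e_dim)              # stand-in for the encoder output
z_flat = z.view(-1, e_dim)

# Distance from every latent vector to every codebook entry.
d = torch.cdist(z_flat, embedding.weight)
min_encoding_indices = torch.argmin(d, dim=1)

# Route 1: one-hot scatter + matmul, as in the linked code.
min_encodings = torch.zeros(z_flat.shape[0], n_e)
min_encodings.scatter_(1, min_encoding_indices.unsqueeze(1), 1)
z_q_matmul = torch.matmul(min_encodings, embedding.weight).view(z.shape)

# Route 2: a plain embedding lookup.
z_q_lookup = embedding(min_encoding_indices).view(z.shape)

assert torch.allclose(z_q_matmul, z_q_lookup)   # same values

# Both routes reach embedding.weight through differentiable ops,
# so gradients flow to the codebook either way.
z_q_lookup.sum().backward()
assert embedding.weight.grad is not None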

Thank you so much!

vitaminzl commented 3 months ago

I have the same question...