[Open] voidism opened this issue 5 years ago
Thanks for your awesome VQ-VAE implementation!
I read the code here, which is for getting quantized vectors:
https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py#L55-L60
I am wondering why you don't just use the forward function of the `nn.Embedding` object `self.embedding`, like this:

```python
min_encoding_indices = torch.argmin(d, dim=1)
z_q = self.embedding(min_encoding_indices)
```
(The two ways can get the same results.)
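A quick standalone sketch to check that claim (toy sizes; the fresh `nn.Embedding` here is just a stand-in for the class's `self.embedding`, and `d` is recomputed with `torch.cdist` for brevity rather than with the repo's expanded-square formula):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_e, e_dim = 8, 4                     # toy codebook size and embedding dim
embedding = nn.Embedding(n_e, e_dim)  # stand-in for self.embedding
z_flat = torch.randn(10, e_dim)       # stand-in for the flattened encoder output

# Distance from each latent vector to each codebook vector
d = torch.cdist(z_flat, embedding.weight)
min_encoding_indices = torch.argmin(d, dim=1)

# Current approach: build one-hot encodings, then matmul with the codebook
min_encodings = torch.zeros(z_flat.shape[0], n_e)
min_encodings.scatter_(1, min_encoding_indices.unsqueeze(1), 1)
z_q_matmul = torch.matmul(min_encodings, embedding.weight)

# Proposed approach: let nn.Embedding do the lookup directly
z_q_lookup = embedding(min_encoding_indices)

print(torch.allclose(z_q_matmul, z_q_lookup))  # prints: True
```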
If I change the code like this, will I run into trouble (e.g., being unable to pass gradients)?
Thank you so much!
I have the same question...