xuyaojian123 opened 3 months ago
```python
import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,     # codebook size
    decay = 0.8,             # the exponential moving average decay, lower means the dictionary will change faster
    commitment_weight = 1.   # the weight on the commitment loss
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x) # (1, 1024, 256), (1, 1024), (1)
```
`quantized` and `x` are very different. How do I get them close? Can you give an example of training?