MishaLaskin / vqvae

A pytorch implementation of the vector quantized variational autoencoder (https://arxiv.org/abs/1711.00937)

question about perplexity #16

Closed Calemsy closed 7 months ago

Calemsy commented 7 months ago

I have a question about how the perplexity is calculated:

```python
e_mean = torch.mean(min_encodings, dim=0)
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
```

Does `dim=0` average over the batch dimension (i.e., across all samples, giving each codebook entry's usage frequency), rather than averaging the embeddings (from the codebook) assigned to the samples?
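For context, here is a minimal sketch of what that reduction does, assuming `min_encodings` is the usual one-hot assignment matrix of shape `(batch, n_codes)` (the toy values below are made up for illustration):

```python
import torch

# Toy example: 4 samples, codebook of 3 entries. Each row of
# min_encodings is a one-hot vector marking which codebook entry
# that sample was assigned to.
min_encodings = torch.tensor([
    [1., 0., 0.],
    [1., 0., 0.],
    [0., 1., 0.],
    [0., 0., 1.],
])

# dim=0 averages over the batch, yielding each code's usage
# frequency -- NOT an average of the embedding vectors themselves.
e_mean = torch.mean(min_encodings, dim=0)  # tensor([0.50, 0.25, 0.25])

# Perplexity = exp(entropy of the usage distribution). It ranges
# from 1 (only one code ever used) up to n_codes (uniform usage),
# so it measures how much of the codebook is actually in use.
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
```

So the mean over `dim=0` turns the one-hot assignments into a probability distribution over codebook entries, and the perplexity of that distribution is the effective number of codes being used.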

thanks!