I have some doubts about the calculation of perplexity:

```python
e_mean = torch.mean(min_encodings, dim=0)
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
```

Does `dim=0` average each dimension across all samples, rather than averaging each sample's `embedding` (from the codebook)?

Thanks!
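For anyone with the same question, here is a minimal runnable sketch, assuming `min_encodings` is a one-hot matrix of shape `(num_samples, codebook_size)` as in typical VQ-VAE implementations (the 4-entry codebook and the sample indices below are made up for illustration). It shows that `dim=0` averages over the sample axis, producing the usage frequency of each codebook entry, and that the perplexity is the exponentiated entropy of that usage distribution:

```python
import torch
import torch.nn.functional as F

# Hypothetical example: 6 samples quantized against a codebook of 4 entries.
# min_encodings is one-hot: row i marks which codebook entry sample i used.
indices = torch.tensor([0, 0, 1, 2, 2, 2])
min_encodings = F.one_hot(indices, num_classes=4).float()

# dim=0 averages over the sample axis, so e_mean[k] is the fraction of
# samples assigned to codebook entry k, i.e. a probability distribution
# over codebook entries (here: [0.3333, 0.1667, 0.5000, 0.0000]).
e_mean = torch.mean(min_encodings, dim=0)

# Perplexity = exp(entropy of the code-usage distribution);
# the 1e-10 guards against log(0) for unused entries.
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
print(perplexity)  # ~2.75, roughly the number of effectively used codes
```

So the mean is taken across samples per codebook entry, not across each sample's embedding vector.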