sbarratt / inception-score-pytorch

Inception Score for GANs in Pytorch
MIT License

about entropy #9

Open albb762 opened 6 years ago

albb762 commented 6 years ago

Hi. Thank you for sharing your code. I tried it with some toy data and found that if we want to assign a high score to data with a uniform P(y) and a skewed P(y|x), we should use entropy(py, pyx) instead of entropy(pyx, py). And here is the code from OpenAI: kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0))) https://github.com/openai/improved-gan/blob/master/inception_score/model.py
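
A small toy check of the two orderings (the array values and names here are just illustrative, not from the repo):

```python
import numpy as np
from scipy.stats import entropy

# Toy predictions: 4 samples, 3 classes, confident ("skewed") p(y|x)
pyx = np.array([[0.90, 0.05, 0.05],
                [0.05, 0.90, 0.05],
                [0.05, 0.05, 0.90],
                [0.90, 0.05, 0.05]])
py = pyx.mean(axis=0)  # marginal p(y), close to uniform for this toy set

# Ordering used in this repo: KL(p(y|x) || p(y)) per sample
kl_repo = [entropy(row, py) for row in pyx]

# Reversed ordering suggested above: KL(p(y) || p(y|x)) per sample
kl_reversed = [entropy(py, row) for row in pyx]

print(np.mean(kl_repo), np.mean(kl_reversed))  # the two means differ in general
```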

kmaeii commented 4 years ago

Hi, I have the same confusion. So should I replace 'entropy(py, pyx)' with 'kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))'? In my opinion, these are not equal.
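
If it helps, a quick numerical sketch (toy data, not the repo's code) suggests the OpenAI expression, summed over the class axis, gives the same per-sample value as scipy's entropy(pyx, py):

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
part = rng.dirichlet(np.ones(10), size=32)  # 32 fake softmax outputs over 10 classes
py = np.mean(part, 0)                       # marginal p(y)

# OpenAI-style per-sample KL: sum_j p(y_j|x) * (log p(y_j|x) - log p(y_j))
kl_openai = np.sum(part * (np.log(part) - np.log(np.expand_dims(py, 0))), 1)

# scipy ordering used in this repo: entropy(pyx, py) == KL(p(y|x) || p(y))
kl_scipy = np.array([entropy(row, py) for row in part])

print(np.allclose(kl_openai, kl_scipy))  # True
```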

sbarratt commented 4 years ago

In the original paper (https://papers.nips.cc/paper/6125-improved-techniques-for-training-gans.pdf, page 4), it is 'entropy(pyx, py)', i.e. the KL divergence between p(y|x) and p(y).

Shane

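For reference, the score defined in that paper is, as I read it, the exponential of the expected KL divergence between the conditional and marginal label distributions:

```latex
\mathrm{IS}(G) = \exp\Big( \mathbb{E}_{x \sim p_g}\, D_{\mathrm{KL}}\big( p(y \mid x) \,\|\, p(y) \big) \Big)
```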

bomtorazek commented 3 years ago

scipy.stats.entropy computes the KL divergence if two distributions are given.

"If qk is not None, then compute the Kullback-Leibler divergence" You can check the document below, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html