bjlkeng / sandbox

Play time!
MIT License

What N means in cls_loss in vae-m2-fit-mnist.ipynb #5

Closed zas97 closed 5 years ago

zas97 commented 5 years ago

I don't understand what N is in cls_loss in vae-m2-fit-mnist.ipynb in vae-semi_supervised_learning. It isn't specified in the original paper https://arxiv.org/pdf/1406.5298.pdf or in your blog post. If I'm not mistaken, in your implementation this variable corresponds to the number of labeled samples; however, I looked at the implementation from the original paper https://github.com/dpkingma/nips14-ssl/blob/master/learn_yz_x_ss.py (line 248) and they use the total number of samples divided by the number of labeled samples as N instead. Could you clear up my confusion?

Thank you and keep up your good work! I'm finding your blog really helpful =)

bjlkeng commented 5 years ago

Hi @zas97 thanks for your kind words!

That's a good question. I also assumed it was either the number of labelled samples or the total number of samples (labelled + unlabelled), although I can't remember now why I set it to 1000 in that notebook. In any case, it's not really that important: the paper sets \alpha = 0.1 N, where \alpha is a hyperparameter, so theoretically it's something you can tune.
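To make the discussion concrete, here is a minimal sketch (not the notebook's actual code) of how N enters the M2 objective: it only appears through the weight \alpha = 0.1 N on the supervised classification term, so different choices of N just rescale that one hyperparameter. The function name and NumPy formulation are my own for illustration.

```python
import numpy as np

def weighted_cls_loss(y_true, y_pred_probs, n, alpha_scale=0.1):
    """Supervised classification term of the M2 objective (Kingma et al., 2014).

    alpha = alpha_scale * n, where n is a dataset-size constant whose exact
    definition is the point of debate: the notebook fixes it (e.g. n = 1000,
    roughly the number of labelled samples), while the paper's reference code
    uses total samples / labelled samples. Either way, only the product
    alpha_scale * n matters, so n is effectively tunable.
    """
    alpha = alpha_scale * n
    # Mean cross-entropy over the labelled minibatch, scaled by alpha.
    ce = -np.sum(y_true * np.log(y_pred_probs + 1e-12), axis=1)
    return alpha * ce.mean()

# Example: a single labelled MNIST-style sample with a 50/50 prediction.
y_true = np.array([[1.0, 0.0]])
y_pred = np.array([[0.5, 0.5]])
loss_notebook = weighted_cls_loss(y_true, y_pred, n=1000)     # alpha = 100
loss_paper = weighted_cls_loss(y_true, y_pred, n=50000 / 100)  # alpha = 50
```

Note that switching between the two conventions for N changes the loss only by a constant factor, which is why tuning \alpha (or equivalently N) directly is a reasonable substitute for matching either codebase exactly.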

bjlkeng commented 5 years ago

Closing this issue since no response for a while.