maestrojeong / t-SNE

implementation of t-SNE with tensorflow

Question: Implementation of t-SNE #1


jolespin commented 6 years ago

First of all, thank you so much for posting your implementation of t-SNE publicly on GitHub. I have been trying to learn how t-SNE works and following your implementation is helping me get a grasp of how to use TensorFlow and how t-SNE actually works.

I did have a few questions, though:

1. In `def t_sne(y)`, what is `nmap`?
2. If `t_sne` is the function being minimized, what is happening in `In[7]`?
3. How did you choose 4 hidden layers?
4. Why did you increase the number of neurons in layer 3 above the number of input features?
5. Is there a reason you chose `tf.nn.relu` for the activation functions?

Sorry for all of the questions but this is really interesting stuff and your GitHub is literally the only resource that I've seen showing how it works!

maestrojeong commented 6 years ago

Sorry for the late reply. (1) `nmap` is the dimension of the output feature: here I wanted a 3-dimensional t-SNE figure, so `nmap=3`. (2) That cell computes `sigma`, which is needed to calculate `prob`. (3), (4), (5) were chosen arbitrarily. Thank you for your encouraging comments.
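For context on answer (2): in t-SNE, each point gets its own `sigma`, typically found by binary search so that the entropy of its conditional distribution matches a chosen perplexity; those conditionals are the `prob` values the loss is built from. The repo's exact code isn't shown in this thread, so the following is only a minimal NumPy sketch of that standard step (function and variable names are my own, not the repo's):

```python
import numpy as np

def cond_probs(X, perplexity=30.0, tol=1e-5, max_iter=50):
    """Compute t-SNE conditional probabilities p_{j|i}.

    Each sigma_i is chosen by binary search over beta = 1 / (2 * sigma_i^2)
    so that the Shannon entropy of row i matches log(perplexity).
    """
    n = X.shape[0]
    # Squared Euclidean distance matrix.
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    P = np.zeros((n, n))
    target = np.log(perplexity)
    for i in range(n):
        beta, beta_lo, beta_hi = 1.0, 0.0, np.inf
        d = np.delete(D[i], i)  # exclude self-distance
        for _ in range(max_iter):
            w = np.exp(-d * beta)
            p = w / w.sum()
            H = -np.sum(p * np.log(p + 1e-12))  # entropy of row i
            if abs(H - target) < tol:
                break
            if H > target:
                # Distribution too flat: sharpen it by increasing beta.
                beta_lo = beta
                beta = beta * 2.0 if beta_hi == np.inf else (beta + beta_hi) / 2.0
            else:
                beta_hi = beta
                beta = (beta + beta_lo) / 2.0
        P[i, np.arange(n) != i] = p
    return P
```

Each row of `P` then sums to 1, with the diagonal fixed at 0; a symmetrized version of these conditionals is what the KL-divergence objective compares against the low-dimensional similarities.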

I believe I read the t-SNE paper and other implementations to build this code with TensorFlow. I should have cited those other implementations.