Closed: ritagonmar closed this issue 3 years ago.
Hmm, I can't reproduce this. I've generated some random data and used the standard perplexity-based affinity kernel:
import numpy as np
from openTSNE import affinity

np.random.seed(0)
data = np.random.normal(0, 10, size=(5000, 10))
A = affinity.PerplexityBasedNN(data)
and I get identical results for both runs. The logs also show it's called with lr=416.67 (i.e. 5000 / 12) in both cases. Could you please send me the data you're using so I can debug this further, or, if the data isn't public, find another example where this doesn't work?
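For reference, this is roughly the kind of side-by-side comparison I mean; the embedding construction and optimize calls below are only a sketch of openTSNE's usual workflow, not the exact script:

import numpy as np
from openTSNE import affinity, initialization, TSNEEmbedding

np.random.seed(0)
data = np.random.normal(0, 10, size=(5000, 10))
n = data.shape[0]

A = affinity.PerplexityBasedNN(data)
init = initialization.pca(data, random_state=0)

# Run 1: rely on the default learning rate.
emb_default = TSNEEmbedding(init, A)
emb_default = emb_default.optimize(n_iter=250, exaggeration=12)
emb_default = emb_default.optimize(n_iter=500)

# Run 2: pass the documented default max(200, n / 12) explicitly.
emb_explicit = TSNEEmbedding(init, A)
emb_explicit = emb_explicit.optimize(n_iter=250, exaggeration=12, learning_rate=max(200, n / 12))
emb_explicit = emb_explicit.optimize(n_iter=500, learning_rate=max(200, n / 12))

# If the default is applied as documented, both runs should give the same result.
print(np.allclose(emb_default, emb_explicit))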
I tried again and I can't reproduce it either. I must have done something weird back then that I cannot recall. Sorry about that!
Hi Pavlin,
So I have noticed something weird about the default learning rate when using .optimize(). In the documentation of .optimize() it says that the default learning rate is max(200, n/12). In my case, the default learning rate should be n/12, since it is larger than 200. I ran it without specifying the learning rate (assuming it would automatically use n/12) and got a weird embedding. I reran it specifying learning_rate=n/12 and got a different embedding that seemed to make more sense. I think there may be a problem with the default learning rate and that it doesn't actually use max(200, n/12).
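Just to make the rule concrete, a quick sketch (the sample size below is only illustrative, not my actual dataset):

n = 5000                       # illustrative number of samples
default_lr = max(200, n / 12)  # the documented default rule
print(default_lr)              # 416.67, so n/12 wins over the 200 floor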
Steps to reproduce the behavior
I realised this using the following piece of code:
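A minimal sketch of that kind of run, relying on the default learning rate (variable names are placeholders, not the original snippet):

embedding = TSNEEmbedding(init, A)  # init and A built as in the example above
embedding = embedding.optimize(n_iter=250, exaggeration=12)
embedding = embedding.optimize(n_iter=500)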
In the version that produced the correct embedding I just specified the learning rate explicitly, like this:
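Again as a placeholder sketch rather than the original snippet, the explicit version passes the value directly:

lr = data.shape[0] / 12  # n/12, which is larger than 200 here
embedding = TSNEEmbedding(init, A)
embedding = embedding.optimize(n_iter=250, exaggeration=12, learning_rate=lr)
embedding = embedding.optimize(n_iter=500, learning_rate=lr)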