UCL-SML / Doubly-Stochastic-DGP

Deep Gaussian Processes with Doubly Stochastic Variational Inference
Apache License 2.0

Read Values of the optimization parameters after training #25

Open Hebbalali opened 6 years ago

Hebbalali commented 6 years ago

Hello, I have noticed that after training the DGP, calling model.read_values() returns the same values as before the training, even though the model has been trained correctly: the value of model.compute_log_likelihood() is different before and after training, while the reported parameters remain the same. So I think that model.read_values() is probably not the correct function to read the values after the optimization? Thank you in advance for clarifying this question! Ali

hughsalimbeni commented 6 years ago

Are you referring to the variational parameters q_mu and q_sqrt or the hyperparameters? One problem I've had is that the variational parameter numpy arrays don't get updated when using natural gradients (as they have the trainable flag set to False; there's a rough sketch of what I mean after the snippet below). The other parameters should be working though. Could you provide an example?

In the meantime, though, evaluating the TensorFlow variable in the session should always work. E.g.

sess = model.enquire_session()  # get the current session 
print(sess.run(model.p.constrained_tensor))  # prints the value of parameter p
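
For context, here is a rough sketch of the natural-gradient setup I was referring to (GPflow 1.x style; the step sizes, iteration counts and the loop over model.layers are only illustrative, not exactly what the demos do). Because q_mu and q_sqrt are made non-trainable, the optimizer updates the tensors in the session but nothing copies the values back into the parameters' numpy arrays, which is why read_values() can look stale for them:

from gpflow.training import AdamOptimizer, NatGradOptimizer

# mark the variational parameters as non-trainable so Adam leaves them alone
for layer in model.layers:
    layer.q_mu.trainable = False
    layer.q_sqrt.trainable = False

# natural-gradient steps act on the (q_mu, q_sqrt) pairs, Adam on everything else
var_list = [(layer.q_mu, layer.q_sqrt) for layer in model.layers]

# in practice these two are alternated in a training loop
NatGradOptimizer(gamma=0.1).minimize(model, var_list=var_list, maxiter=1)
AdamOptimizer(0.01).minimize(model, maxiter=1000)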
Hebbalali commented 6 years ago

To read the values of the parameters I used, as in GPflow, model.read_values or model.as_pandas_table. However, for ALL the trainable parameters the values printed do not change before and after the training. But by running the tensorflow variable as you suggested, print(sess.run(model.p.constrained_tensor)), the true values of the parameters after training are printed.

hughsalimbeni commented 6 years ago

I've never actually used .read_values before, I've always just done print(model). Do you get the same issue for a vanilla gpflow model, e.g. SVGP? Also, can I check which version of gpflow you're using?
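
If it does turn out to be the cached numpy arrays going stale, two things that should help (a sketch against the GPflow 1.x API, untested here): read_values can be pointed at the session directly, or the model can be anchored so the cached values are refreshed. Note that anchoring only copies back trainable parameters, so natural-gradient-updated q_mu/q_sqrt still need the session route.

sess = model.enquire_session()

# read parameter values straight from the session rather than the cached arrays
print(model.read_values(session=sess))

# or copy the current session values back into the (trainable) parameters,
# after which plain read_values() / print(model) reflect the trained state
model.anchor(sess)
print(model.read_values())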