neildhir opened this issue 7 years ago (status: Open)
@adamian do you have time to take a look at this?
Thanks Eero! My memory is that our GPDM functionality is/was rather limited. Do you have a sense of how easy or difficult it would be to add the functionality via a pull request? If you were confident enough to do that, it would be great, so that others could benefit from your work. I appreciate it may involve more structural changes than you'd be prepared to take on, but I think @adamian can advise.
On Thu, 3 Aug 2017 at 18:41 nd308 notifications@github.com wrote:
Apologies in advance if this is not the right place to ask this sort of thing, or if this is all covered in the docs; I have not managed to find it.
I am unsure about my implementation of "mean-prediction" for Gaussian process dynamical models (GPDM) http://www.dgp.toronto.edu/%7Ejmwang/gpdm/nips05final.pdf, which is the same as for the Gaussian process latent variable model (GPLVM). Basically, I am not quite sure I've done the right thing. I want to use this for synthesising new observation-space examples.
Here is a quote from Section 3.1 of the paper. I will be using an MWE for the GPLVM -- but they use the same mean-prediction method.
> For both 3D people tracking and computer animation, it is desirable to generate new motions efficiently. Here we consider a simple online method for generating a new motion, called mean-prediction, which avoids the relatively expensive Monte Carlo sampling used above. In mean-prediction, we consider the next time step $\tilde{x}_t$ conditioned on $\tilde{x}_{t-1}$ from the Gaussian prediction:
Equations (17) and (18) follow here in the paper.
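For readers without the paper to hand, equations (17) and (18) are the standard GP predictive mean and variance applied to the latent dynamics. The form below is reconstructed from the surrounding text and the code in this thread, so please check it against the original paper:

```latex
\mu_X(\tilde{x}_{t-1}) = X_{2:N}^\top K_X^{-1} \, k_X(\tilde{x}_{t-1})
\qquad\text{(17)}
\]
\[
\sigma_X^2(\tilde{x}_{t-1}) = k_X(\tilde{x}_{t-1}, \tilde{x}_{t-1})
  - k_X(\tilde{x}_{t-1})^\top K_X^{-1} \, k_X(\tilde{x}_{t-1})
\qquad\text{(18)}
```

where $X_{2:N}$ stacks the latent points $x_2,\dots,x_N$, $K_X$ is the kernel matrix over $x_1,\dots,x_{N-1}$, and $k_X(\cdot)$ is the vector of kernel evaluations of the query point against those inputs.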
This is a GPy implementation of how I think it should be done.
```python
import GPy
import numpy as np

Y = ...  # [IMPORT SOME TRAINING DATA OF CHOICE]

D = Y.shape[1]
Q = 10    # dimensionality of the latent space
N = 1000  # number of time-steps in the training data

sample_mean = np.mean(Y[:N, :])
sample_std = np.std(Y[:N, :])
sample_norm = (Y[:N, :] - sample_mean) / sample_std  # normalisation

kern = GPy.kern.RBF(Q, ARD=True)
m = GPy.models.GPLVM(sample_norm, input_dim=Q, kernel=kern, init='PCA')
m.optimize(messages=1, max_iters=10)
```
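One editorial aside on the normalisation above (not part of the original question): `np.mean(Y[:N,:])` with no `axis` argument returns a single scalar over all entries and all dimensions. If per-dimension normalisation is what's intended, pass `axis=0`. A small self-contained sketch of the difference, on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: three columns with very different means and scales
Y = rng.normal(loc=[0.0, 5.0, -3.0], scale=[1.0, 2.0, 0.5], size=(1000, 3))

# Scalar normalisation (as in the snippet above): one mean/std for everything.
scalar_norm = (Y - np.mean(Y)) / np.std(Y)

# Per-dimension normalisation: each column becomes zero-mean, unit-variance.
col_norm = (Y - Y.mean(axis=0)) / Y.std(axis=0)

print(np.allclose(col_norm.mean(axis=0), 0.0))  # columns are zero-mean
print(np.allclose(col_norm.std(axis=0), 1.0))   # and unit-variance
```

With scalar normalisation, columns whose scale differs from the global standard deviation keep that difference; which behaviour is right depends on the data.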
Or just grab one of the GPy examples from here; they should do the same thing. http://gpy.readthedocs.io/en/deploy/GPy.examples.html#module-GPy.examples.dimensionality_reduction
A simple function for prediction then uses the trained GPLVM:
```python
def prediction(n):
    """Synthesise n new observation-space examples from the trained GPLVM.

    Relies on the globals defined above: the model `m`, and N, Q, D,
    sample_norm, sample_mean, sample_std.
    """
    # Latent space of the trained model
    X = m.X
    Kx = m.kern.K(X[0:N - 1, :])
    Kx_inv = np.linalg.inv(Kx)

    N_synt = n                   # number of time-steps we want to simulate
    X_synt = np.zeros((N_synt, Q))
    X_last = X[-1:]

    # Mean-prediction method: regress x_t on x_{t-1}
    X1 = X[:N - 1, :]
    X2 = X[1:N, :].T

    def meanPrediction(X_last):
        k_x = m.kern.K(X1, X_last)
        return np.dot(np.dot(X2, Kx_inv), k_x).flatten()

    # Roll the latent dynamics forward, feeding each prediction back in
    for i in range(N_synt):
        X_synt[i, :] = meanPrediction(X_last.reshape(1, Q))
        X_last = X_synt[i, :]

    # Map the synthesised latent trajectory to observation space
    Y_synt = np.zeros((N_synt, D))
    Ky = m.kern.K(X)
    Ky_inv = np.linalg.inv(Ky)
    Y_t = sample_norm.T
    k_y = m.kern.K(X, X_synt)
    k_yy = m.kern.K(X_synt, X_synt)
    meanY = np.dot(np.dot(Y_t, Ky_inv), k_y)
    covY = k_yy - np.dot(np.dot(k_y.T, Ky_inv), k_y)

    # Sample with a Cholesky factor of the predictive covariance (sampling
    # with covY itself would give the wrong distribution); the jitter is a
    # numerical-stability fudge.
    L = np.linalg.cholesky(covY + 1e-8 * np.eye(N_synt))
    for i in range(D):
        Y_synt[:, i] = np.dot(L, np.random.randn(N_synt)) + meanY[i, :]

    # Shift back to un-normalised space
    return sample_std * Y_synt + sample_mean
```
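As a sanity check on the linear algebra alone (an editorial sketch, independent of GPy; the hand-rolled RBF kernel and toy trajectory below are assumptions, not the model above): with a noise-free kernel, mean-prediction queried at a training input should reproduce the following training point exactly, since $K_X^{-1} k_X(x_j) = e_j$:

```python
import numpy as np

def rbf(A, B, lengthscale=0.3):
    """Squared-exponential kernel matrix between the rows of A and of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(1)
N, Q = 50, 3
X = rng.normal(size=(N, Q))   # a toy latent trajectory

X1 = X[:N - 1, :]             # inputs  x_{t-1}
X2 = X[1:N, :].T              # targets x_t, shape (Q, N-1)

Kx = rbf(X1, X1)
Kx_inv = np.linalg.inv(Kx)

def mean_prediction(x_last):
    k_x = rbf(X1, x_last.reshape(1, Q))
    return (X2 @ Kx_inv @ k_x).ravel()

# Querying at training input x_10 should recover x_11
pred = mean_prediction(X1[10])
print(np.allclose(pred, X[11], atol=1e-6))
```

This only exercises the matrix shapes and the interpolation property, not the quality of the learned latent space, but it is a cheap way to convince yourself the `X2 @ Kx_inv @ k_x` expression matches the paper's predictive mean.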
— View it on GitHub: https://github.com/SheffieldML/GPy/issues/532
Well, it should be the same mean-prediction function for the GPLVM as well (unless I have misunderstood something).
I can take a stab at a pull request, but still need to know if this is correct or not :)