Closed: buntyke closed this issue 8 years ago.
1) You can fix any parameter for optimization; just have a look at the first notebooks in https://github.com/SheffieldML/notebook/blob/master/GPy/index.ipynb. 2) You could just fix one of the two models (Y0, Y1) and optimize for a bit, then fix the other model and optimize for another bit. This is all within your reach; a sketch follows after the quoted message below.
What functionality do you mean specifically?
On 9 Dec 2015, at 11:09, Nishanth Koganti notifications@github.com wrote:
I am currently using MRD to learn a shared manifold between a high-dimensional space Y (7500) and a low-dimensional space Z (8). The Y samples are noisy, so training converges to hyperparameter values where all the variance is explained by the noise term in the kernel function.
I am interested in 2 modifications to MRD: 1) Is it possible to fix the noise parameters for the first few iterations while training? This is similar to the initVardistIters variable in the vargplvm toolbox. 2) Is it possible to initialize the latent space using a Bayesian GPLVM instead of just using PPCA? This is similar to the 'vargplvm' flag in the vargplvm toolbox.
A general query: a lot of the functionality present in the vargplvm toolbox has been dropped in GPy. Was there a specific reason for this?
Cheers, Nishanth
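To make the advice above concrete, here is a minimal sketch of the suggested two-phase optimization, assuming GPy's fix()/unfix() parameter interface and regex-based parameter indexing; the data shapes, latent dimensionality, inducing point count, iteration counts and the '.*Gaussian_noise' parameter name are placeholders that may need adjusting for your GPy version.

```python
import numpy as np
import GPy

# Toy stand-ins for the two views (small placeholder shapes; the real data
# in this thread is 7500- and 8-dimensional).
Y = np.random.randn(50, 100)
Z = np.random.randn(50, 8)

# input_dim and num_inducing are placeholders, not values from the thread.
m = GPy.models.MRD([Y, Z], input_dim=10, num_inducing=20)

# Phase 1: fix the Gaussian noise variances so the model cannot explain
# everything with the noise term early on (roughly what initVardistIters
# achieves in the vargplvm toolbox).
m['.*Gaussian_noise'].fix()
m.optimize(messages=True, max_iters=200)

# Phase 2: release the noise parameters and optimize everything jointly.
m['.*Gaussian_noise'].unfix()
m.optimize(messages=True, max_iters=1000)
```

The same fix()/unfix() pattern applies to any named sub-parameter, so the second suggestion (fixing one view's model while optimizing the other) can be done the same way once you know the parameter names shown by print(m).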
Thanks a lot for the useful information.
As I am migrating from the MATLAB toolbox, I was looking for a solution in the wrong place, expecting some sort of flag within the MRD class definition.
What functionality do you mean specifically? Most of the flags in the toolbox that I was interested in, such as 'initWithStatic', could be implemented using your advice.
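For anyone looking for something like the 'vargplvm'/'initWithStatic' initialization, a hedged sketch of one way to approximate it in GPy is to fit a Bayesian GPLVM first and pass its variational posterior as the starting latent space for MRD. The X / X_variance constructor arguments exist in GPy's MRD, but the attribute names and settings below are assumptions and may differ between versions.

```python
import numpy as np
import GPy

Y = np.random.randn(50, 100)  # toy high-dimensional view (placeholder)
Z = np.random.randn(50, 8)    # toy low-dimensional view (placeholder)

# Fit a Bayesian GPLVM on one view first...
bgplvm = GPy.models.BayesianGPLVM(Y, input_dim=10, num_inducing=20)
bgplvm.optimize(messages=True, max_iters=500)

# ...then hand its variational posterior over the latent points to MRD as the
# starting latent space (the .X.mean / .X.variance attribute names are
# assumptions and may differ between GPy versions).
m = GPy.models.MRD([Y, Z], input_dim=10, num_inducing=20,
                   X=bgplvm.X.mean.values.copy(),
                   X_variance=bgplvm.X.variance.values.copy())
m.optimize(messages=True, max_iters=1000)
```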
I was also looking for the 'balanceModalityDim' option, where all the low-dimensional modalities are mapped to the highest dimensionality using a random matrix. But I could implement that myself; it need not be part of GPy.
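A rough stand-alone sketch of that idea, assuming 'balanceModalityDim' simply means projecting every lower-dimensional view up to the largest view's dimensionality with a fixed random matrix (the scaling below is a guess):

```python
import numpy as np

def balance_modality_dims(Ylist, seed=0):
    """Project every view up to the dimensionality of the largest view with a
    fixed random matrix (hypothetical helper, not part of GPy or vargplvm)."""
    rng = np.random.RandomState(seed)
    d_max = max(Y.shape[1] for Y in Ylist)
    balanced = []
    for Y in Ylist:
        d = Y.shape[1]
        if d < d_max:
            R = rng.randn(d, d_max) / np.sqrt(d)  # the scaling here is a guess
            balanced.append(Y.dot(R))
        else:
            balanced.append(Y)
    return balanced

# e.g. Ylist = balance_modality_dims([Y, Z]) before constructing the MRD model
```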
Sorry, I should have done my research about GPy in general before posting this issue. This issue can be closed.