AndreaBraschi opened this issue 2 years ago
This might be unrelated, but do you need to learn inducing locations per data dimension?

```python
self.inducing_inputs = torch.randn(data_dim, n_inducing, latent_dim)
```

That's one way of doing it, but one can also learn a single shared set of inducing locations:

```python
self.inducing_inputs = torch.randn(n_inducing, latent_dim)
```
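For context, here is a minimal sketch of how either choice plugs into a GPyTorch variational strategy. The sizes are illustrative, and the `VariationalStrategy` line is shown as it would sit inside the model's `__init__`; this mirrors the standard `gpytorch.variational` API rather than the author's exact code:

```python
import torch
from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy

data_dim, n_inducing, latent_dim = 24, 32, 5  # illustrative sizes

# Per-output-dimension inducing locations: one set of inducing points per DoF.
inducing_per_dim = torch.randn(data_dim, n_inducing, latent_dim)

# Shared inducing locations: a single set, broadcast across the output batch.
inducing_shared = torch.randn(n_inducing, latent_dim)

# q(u) is batched over the outputs either way; with the shared variant the
# inducing inputs are broadcast against this batch shape.
q_u = CholeskyVariationalDistribution(n_inducing, batch_shape=torch.Size([data_dim]))

# Inside the model's __init__ (self being the ApproximateGP / BayesianGPLVM):
# q_f = VariationalStrategy(self, inducing_shared, q_u, learn_inducing_locations=True)
```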
Hi,
I am working on developing a Bayesian GP-LVM that can reproduce human motion trajectories.
I have already posted a question (#1867); however, I am also working on a version that assumes the random variables in the latent space (the X's) are i.i.d., to see whether the GP-LVM can recognise the sequential dependence.
The dataset I'm working with consists of 19 trials, each containing 101 data points for 24 degrees of freedom (DoF), so my Y is a torch Tensor of size (19, 101, 24). Following the last part of paragraph 3.3.2 of Damianou, Titsias and Lawrence (2016) (https://jmlr.csail.mit.edu/papers/volume17/damianou16a/damianou16a.pdf), I am creating a latent space for each trial and trying to optimise the hyperparameters of a single f(X) that can later be used in evaluation mode to make predictions when some DoF are missing. Basically, what I would like to do is maximise the VariationalELBO over all DoF of all trials at once, if that makes sense.

I've stuck with a NormalPrior to define the prior over X, since I'm assuming the random variables in the latent space are independent, and created batches corresponding to the number of trials. During optimisation, q(X) generates samples of the size I would expect: n_trials x trial length x latent dimensions. However, the optimisation only succeeds if I loop through the trials one at a time, which isn't exactly what I'm trying to do.
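For concreteness, a minimal sketch of the shapes described above, assuming gpytorch's NormalPrior and a plain factorised Gaussian as a stand-in for whatever latent-variable parametrisation the model actually uses (latent_dim = 5 is purely illustrative):

```python
import torch
from gpytorch.priors import NormalPrior

n_trials, n_points, n_dof, latent_dim = 19, 101, 24, 5  # latent_dim is illustrative

Y = torch.randn(n_trials, n_points, n_dof)  # stand-in for the real (19, 101, 24) data

# i.i.d. N(0, 1) prior over X, with one latent space per trial.
X_prior_mean = torch.zeros(n_trials, n_points, latent_dim)
prior_x = NormalPrior(X_prior_mean, torch.ones_like(X_prior_mean))

# A factorised Gaussian q(X); rsample() returns exactly the shape described above.
q_mu = torch.nn.Parameter(torch.randn(n_trials, n_points, latent_dim))
q_log_sigma = torch.nn.Parameter(torch.zeros(n_trials, n_points, latent_dim))
q_x = torch.distributions.Normal(q_mu, q_log_sigma.exp())
X_sample = q_x.rsample()  # torch.Size([19, 101, 5]): n_trials x trial length x latent dims
```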
Anyway, here is the code; you can also find a folder that contains all the data. My code works fairly well when the learning is done on a single trial (when `Batch = False`).
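To make the gap concrete, here is a minimal sketch of the two regimes. The names `model` (an ApproximateGP batched over the 24 DoF), `mll` (a VariationalELBO) and `optimizer` are assumptions standing in for the attached code, and `q_x`, `Y` and `n_trials` are as in the sketch above; this is an illustration, not the attached code itself:

```python
# Assumed objects: model (ApproximateGP batched over the 24 DoF), mll
# (gpytorch.mlls.VariationalELBO), optimizer, plus q_x / Y / n_trials above.

# Per-trial loop: this is the version that optimises successfully.
for i in range(n_trials):
    optimizer.zero_grad()
    X_i = q_x.rsample()[i]             # (101, latent_dim)
    output = model(X_i)                # MultivariateNormal batched over the 24 DoF
    loss = -mll(output, Y[i].T).sum()  # targets shaped (24, 101), one row per DoF
    loss.backward()
    optimizer.step()

# Batched objective (the goal): one ELBO over all trials at once. The trial
# dimension has to broadcast against the model's DoF batch, so one option is
# to insert a singleton dimension before the DoF batch:
optimizer.zero_grad()
X_all = q_x.rsample().unsqueeze(1)             # (19, 1, 101, latent_dim)
output = model(X_all)                          # batch shape broadcasts to (19, 24)
loss = -mll(output, Y.permute(0, 2, 1)).sum()  # targets shaped (19, 24, 101)
loss.backward()
optimizer.step()
```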
Any idea what I'm doing wrong? Any help would be much appreciated.
Data.zip