alshedivat / keras-gp

Keras + Gaussian Processes: Learning scalable deep and recurrent kernels.
MIT License

P-Dimensional GP input shape #25

Marcbaer opened this issue 5 years ago (status: Open)

Marcbaer commented 5 years ago

Dear Maruan,

I am working on a high-dimensional problem with data of shape (n, L, p), where n = #samples, L = length of the input sequence for each feature, and p = #features. I read your paper on recurrent deep kernels, and keras-gp seems to be able to handle a p-dimensional GP input. However, I ran into trouble when using a higher GP input shape: following the notation in your MSGP actuator example, I replaced the dataset and changed the parameter

gp_input_shape=p (line 55)

according to my number of features. The training time seems to explode, even though, as I understood from your paper, the scalable MSGP model should handle a higher-dimensional GP input efficiently. Is there something I have to adjust before working with higher GP input shapes?
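
For concreteness, the change corresponds roughly to the following (the shape values here are only illustrative, not my actual dataset):

```python
# Data of shape (n, L, p): n samples, sequence length L, p features.
n, L, p = 1000, 20, 32       # illustrative numbers only

# GP input dimensionality raised from the small value used in the
# actuator example to the full feature dimensionality.
gp_input_shape = p
```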

I would be very thankful for any help!

Best, Marc

alshedivat commented 5 years ago

Hi Marc, sorry for the slow reply.

MSGP training and inference times depend on the number of inducing points used for the approximation. If you use the grid-based approximation, the number of inducing points grows exponentially with the number of input dimensions.
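
For a back-of-the-envelope illustration (the per-dimension grid size below is just a made-up number, not the one from the example):

```python
# Total inducing points on a Cartesian grid with k points per dimension
# grows as k**p.
k = 70   # illustrative per-dimension grid size
for p in (1, 2, 3, 5):
    print(f"p={p}: {k**p:,} inducing points")
# p=1: 70
# p=2: 4,900
# p=3: 343,000
# p=5: 1,680,700,000
```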

To deal with high dimensionality, you can use a Dense layer to project the points to a lower-dimensional space (linearly or non-linearly) and then apply the GP on top of the projection. This is exactly the technique we used in deep (and recurrent) kernel learning.
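
Here is a minimal sketch of that idea in plain Keras; the architecture and dimensions are illustrative, and the keras-gp GP layer itself is only indicated in a comment (see the MSGP examples in this repo for its actual configuration):

```python
from keras.layers import Input, LSTM, Dense

# Illustrative dimensions (assumptions, not taken from the issue):
L, p = 20, 32    # input sequence length and number of features
d = 2            # dimensionality of the projected GP input

inputs = Input(shape=(L, p))
h = LSTM(64)(inputs)        # recurrent feature extractor
gp_input = Dense(d)(h)      # linear projection to d dims (add an activation
                            # for a non-linear projection)

# The keras-gp GP layer (kgp.layers.GP) is then applied on top of
# `gp_input`, as in the MSGP examples, so the grid-based approximation
# only needs to cover d dimensions instead of p.
```

With d of 2 or 3, the grid size stays manageable even when p is large.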

Hope this helps.