Thanks a lot for your feedback. For real-valued data with a Gaussian likelihood, you can use vecchia_approx = TRUE and select an appropriate number of neighbors (30 is the default) so that computations scale well to large data:
gp_model <- GPModel(gp_coords = the_data$coords_train, cov_function = "exponential",
                    likelihood = "gaussian", vecchia_approx = TRUE, num_neighbors = 30)
For non-Gaussian data, the current implementation unfortunately does not scale well (yes, O(n^3) in time and O(n^2) in memory). A Vecchia approximation is implemented, but it is one where the matrices become dense again, and in my experience it generally does not help much.
You have spotted a good point here. This is likely the area that most urgently needs further research and development. I am very confident that something can be done here, as there are many approaches out there for scaling GP computations to large data. But I cannot give you any guarantee as to when I can work on this. Contributions are welcome. Note that the bottleneck is not the calculation of the distances. But yes, the technical term for what you propose to do in 1. is "tapering", and something along these lines could work.
Thanks for your quick response. I'm glad you agree that this is a relevant area to develop in the future. It's also good to know that the process can be sped up when using a Gaussian likelihood. Sadly, I have few cases with Gaussian data; I work mostly with Poisson, gamma, and binomial likelihoods. I'll still try to see if I can figure something out to continue my testing.
Here are a couple of promising approaches for fitting GPs to large datasets. If you want exact GPs, the Black Box Matrix-Matrix (BBMM) method as implemented in GPyTorch is, I believe, the state of the art:
https://arxiv.org/abs/1809.11165
However, this requires at least one GPU (even better with multiple). For a good approximation for latent GPs, the Hilbert space approximation is worth looking at:
https://arxiv.org/pdf/2004.11408.pdf
The latter paper includes a link to a Stan implementation and should be relatively easy to implement here.
With version 0.6.0, compactly supported Wendland and tapered covariance functions (currently only exponential_tapered) have been added (see e.g. here for background on this). This can be used when setting e.g. cov_function = "wendland". See here for more information. Note that you need to use optimizer_cov = "nelder_mead" for large data.
You can now control the memory usage and the computational time using the taper range parameter (cov_fct_taper_range). For instance, the example below based on the code by @BastienFR runs on my laptop in approx. 20-30 minutes for n = 100'000. See below for more details.
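As a minimal sketch (not the example referred to above; the object names and the taper range value are illustrative assumptions), a Wendland covariance with a taper range could be specified along these lines:
library(gpboost)
# Sketch only: coords_train and y_train are assumed to exist; the taper range
# value is illustrative and acts as a tuning parameter (smaller -> sparser matrices)
gp_model <- fitGPModel(gp_coords = coords_train,
                       cov_function = "wendland",
                       cov_fct_taper_range = 0.05,
                       likelihood = "gaussian",
                       y = y_train,
                       params = list(optimizer_cov = "nelder_mead"))
summary(gp_model)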
I finally got time to test your solution with my data and it works! Thanks a lot! My analysis on 163177 data points worked in a little less than 2 hours with almost no RAM usage using your settings. Now, my results are not that good but it's probably my fault or my data's fault! I'll keep working on it. Thanks again for all your work.
Thank you for your feedback. Apart from the usual tuning parameters in boosting, the taper range cov_fct_taper_range is also a tuning parameter, and changing it might give better results. With very small values the GP becomes ineffective, and one should obtain the same results as in classical "gradient" boosting. Further, you can also include the coordinates in the predictor variables data. This improves predictive accuracy in case there are interactions between the coordinates and other features.
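For instance, a minimal sketch of adding the coordinates to the boosting features (object names are illustrative assumptions, not from the code above):
# Assumed names: X_train (feature matrix), coords_train (n x 2 coordinate matrix)
X_with_coords <- cbind(X_train, coords_train)   # coordinates become two additional features
# X_with_coords is then used as the feature matrix for the boosting part,
# while coords_train continues to serve as gp_coords in the GPModel.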
Hi @fabsig, I also have to deal with big data (about 5,000,000 data points). I am confused about how to deal with such big data using the fitGPModel function, although you have suggested some solutions above. The code I use now is: fitGPModel(group_data=data.group, likelihood="binary", y=ys, X=predictors, params=list(std_dev=TRUE)). For the methods you mentioned above that could work for big data, I should use the code: fitGPModel(group_data=data.group, likelihood="binary", y=ys, X=predictors, params=list(std_dev=TRUE), cov_function = "wendland", optimizer_cov = "nelder_mead"). Is this correct? I am not very familiar with this package and I am not sure which parameters I should set. I would be very grateful if you could give me some advice. Looking forward to your reply.
Thank you for using GPBoost.
You can just use the code you mentioned first:
fitGPModel(group_data=data.group, likelihood="binary", y=ys, X=predictors)
You might also try the Nelder-Mead optimizer, as it is sometimes faster for large data:
fitGPModel(group_data=data.group, likelihood="binary", y=ys, X=predictors, params=list(optimizer_cov = "nelder_mead"))
The cov_function argument is only used for Gaussian process models, which you are not using.
OK, got it. Thanks for your kind reply.
I continued working with gpboost in the hope of applying it to my own data. However, I bumped into what I think is a big limitation and would like to know if something can be done about it. It seems gpboost is very sensitive to sample size: calculation time appears to grow cubically with sample size (O(n^3)). I used a modified version of your examples to illustrate the situation. First, I prepare the session. Then, I wrap your data creation code into a function. I set the different parameters and select the sample sizes I want to test. Finally, I run gpboost on those sample sizes in a loop, saving the time taken for each run.
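In outline, the loop looks something like this (a rough sketch only; the data simulation and all settings below are illustrative placeholders, not the exact code):
# Illustrative sketch of this kind of timing experiment; the data simulation
# and all settings are placeholders.
library(gpboost)

simulate_data <- function(n) {
  coords <- matrix(runif(n * 2), ncol = 2)        # random 2-D locations
  X <- matrix(runif(n * 2), ncol = 2)             # two boosting features
  f <- 2 * X[, 1] + X[, 2]^2                      # fixed-effects (tree) part
  D <- as.matrix(dist(coords))                    # pairwise distances
  Sigma <- exp(-D / 0.1) + diag(1e-10, n)         # exponential covariance
  b <- as.vector(t(chol(Sigma)) %*% rnorm(n))     # spatial GP realization
  y <- f + b + rnorm(n, sd = 0.1)                 # response with noise
  list(coords = coords, X = X, y = y)
}

sample_sizes <- c(500, 1000, 2000, 4000)
times <- numeric(length(sample_sizes))

for (i in seq_along(sample_sizes)) {
  d <- simulate_data(sample_sizes[i])
  gp_model <- GPModel(gp_coords = d$coords, cov_function = "exponential")
  times[i] <- system.time(
    gpboost(data = d$X, label = d$y, gp_model = gp_model,
            nrounds = 50, learning_rate = 0.05,
            objective = "regression_l2", verbose = 0)
  )["elapsed"]
}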
I can then plot the time required, which produced this plot:
We can see that past 6,000 data points it gets really hard and slow to use gpboost, which I personally believe is a low threshold. I was initially expecting performance similar to that of boosting libraries like lightgbm, and we know that lightgbm can handle millions of data points without any problem. But it seems that the limiting factor is the random effect estimation. The same problem seems to be present in other mixed effects packages out there.

Tree boosting is a powerful tool that allows us to obtain good-quality predictions on datasets with lots of observations and lots of variables. GPBoost seems unable to fully harness this power because it adds a random component to the model, but at the same time this random component is what makes GPBoost so interesting. So could gpboost be adapted to handle larger amounts of data?

To give some context, I work in the insurance industry and we usually work with hundreds of thousands, if not millions, of data points. This amount of data is generally needed because we are predicting rare events. In the particular dataset I'm working on right now to test your method and others, we have 114,000 distinct training data points. With this many data points, memory usage was the problem rather than timing (it clogged my 432 GB RAM machine!). I subsampled it to 44,000 distinct training points and memory was no longer a problem, but even after letting the model run for over 4 days, the calculation still had not finished.

I really think this approach has potential and is really useful. But to be truly democratized, it will have to be able to handle more data. I'm no programmer and no mathematician, so it's hard for me to contribute or propose solutions. I'll take a chance here anyway with some suggestions. Feel free to disregard them.