sklearn.gaussian_process.kernels.RationalQuadratic seems to work best, with an R^2 of 0.91 (up from 0.88 for the Matern kernel). That said, RationalQuadratic is an infinite sum of RBF kernels with different length scales, whereas the Matern kernel should be much easier to approximate with a finite-dimensional feature map.
I would like to stick with the Matern kernel, approximate it in a finite-dimensional feature space, and pursue performance improvements by tweaking hyperparameters; sketches of both steps follow.
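For the approximation step, here is a minimal sketch of one standard way to do it: a Nystroem-style low-rank approximation built directly on sklearn's Matern kernel. The function name nystroem_matern_features and the length_scale, nu, and n_components values are placeholders of mine, not anything from the actual pipeline:

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern

def nystroem_matern_features(X, n_components=100, length_scale=1.0, nu=1.5,
                             random_state=0):
    """Map X to an n_components-dimensional feature space whose inner
    products approximate the Matern kernel (Nystroem method)."""
    rng = np.random.default_rng(random_state)
    kernel = Matern(length_scale=length_scale, nu=nu)

    # Sample landmark points from the data.
    idx = rng.choice(len(X), size=n_components, replace=False)
    landmarks = X[idx]

    # K_mm: kernel among landmarks; K_nm: kernel between data and landmarks.
    K_mm = kernel(landmarks)
    K_nm = kernel(X, landmarks)

    # Features = K_nm @ K_mm^{-1/2}, via an eigendecomposition of K_mm,
    # so that Phi @ Phi.T == K_nm @ inv(K_mm) @ K_nm.T.
    eigvals, eigvecs = np.linalg.eigh(K_mm)
    eigvals = np.clip(eigvals, 1e-12, None)  # guard against numerical negatives
    return K_nm @ (eigvecs / np.sqrt(eigvals))

# Quick check of the approximation against the exact kernel matrix.
X = np.random.default_rng(0).normal(size=(500, 4))
Phi = nystroem_matern_features(X, n_components=100)
K_approx = Phi @ Phi.T
K_exact = Matern(length_scale=1.0, nu=1.5)(X)
print(np.abs(K_approx - K_exact).max())
```

The approximation tightens as n_components grows, so it can be traded off against the cost of the downstream linear model.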
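For the hyperparameter side, a sketch of a grid search over Matern settings with GaussianProcessRegressor; the grid values are illustrative guesses, and X_train/y_train are assumed to exist. Note that sklearn keeps nu fixed during the GP's internal marginal-likelihood optimization, so it has to be searched externally like this:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import GridSearchCV

# Hypothetical grid: nu controls the smoothness of the Matern kernel and is
# not optimized internally, so it is enumerated here alongside length_scale.
param_grid = {
    "kernel": [Matern(length_scale=l, nu=nu)
               for l in (0.1, 1.0, 10.0)
               for nu in (0.5, 1.5, 2.5)],
    "alpha": [1e-10, 1e-5, 1e-2],  # noise term added to the kernel diagonal
}
search = GridSearchCV(GaussianProcessRegressor(), param_grid,
                      scoring="r2", cv=5)
# search.fit(X_train, y_train)   # X_train, y_train assumed to exist
# print(search.best_params_, search.best_score_)
```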