This is from a user:
there is a way to get the profile MLE of the marginal variance (sorry, not the signal-to-noise ratio), conditional on the other parameters, which reduces the number of covariance parameters the optimization has to search over. The fields package, for instance, does this to get its estimate of the marginal variance, rho, in its mKrig function.
I believe it works something like this: say K(theta1, theta2) = theta1*K2(theta2) is the covariance matrix depending on parameters theta1 and theta2 (theta2 may be a vector, but theta1 is scalar). Then the profile MLE of theta1, conditional on theta2 and the mean, has the closed form theta1hat = r^T K2(theta2)^(-1) r / n, where r is the vector of residuals of the n observations after subtracting the mean; this follows from setting the derivative of the Gaussian log-likelihood with respect to theta1 to zero, using log|theta1*K2| = n*log(theta1) + log|K2|. If there were a way to incorporate this into GPvecchia, it could speed up the optimization. However, I don't think this would be usable for Vecchia-Laplace approximations and non-Gaussian data.
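For concreteness, here is a minimal R sketch of the profiling idea under the Gaussian assumption above. The function names (profile_marginal_variance, profiled_neg2loglik, K2_fun) are hypothetical illustrations, not part of GPvecchia's or fields' actual API; this is only a dense-matrix sketch of the closed form, not how either package implements it.

    ## Profile MLE of theta1 given residuals r and unit-variance
    ## covariance K2(theta2): theta1hat = r' K2^{-1} r / n.
    profile_marginal_variance <- function(r, K2) {
      n <- length(r)
      L <- chol(K2)                           # K2 = t(L) %*% L, L upper triangular
      z <- backsolve(L, r, transpose = TRUE)  # solves t(L) z = r, so |z|^2 = r' K2^{-1} r
      sum(z^2) / n
    }

    ## Profiled negative 2*log-likelihood over theta2 only: plugging
    ## theta1hat back in gives n*log(theta1hat) + log|K2| + n (up to a
    ## constant), so the optimizer never has to search over theta1.
    profiled_neg2loglik <- function(theta2, r, K2_fun) {
      K2 <- K2_fun(theta2)                    # K2_fun builds the correlation matrix
      L <- chol(K2)
      z <- backsolve(L, r, transpose = TRUE)
      n <- length(r)
      theta1hat <- sum(z^2) / n
      n * log(theta1hat) + 2 * sum(log(diag(L))) + n
    }

    ## Example on simulated data with an exponential correlation of range
    ## theta2 (all values here are made up for illustration):
    set.seed(1)
    locs <- matrix(runif(200), ncol = 2)      # 100 locations in the unit square
    K2_fun <- function(theta2) exp(-as.matrix(dist(locs)) / theta2)
    r <- drop(t(chol(2 * K2_fun(0.3))) %*% rnorm(100))  # truth: theta1 = 2, theta2 = 0.3
    opt <- optimize(profiled_neg2loglik, c(0.01, 1), r = r, K2_fun = K2_fun)
    profile_marginal_variance(r, K2_fun(opt$minimum))   # theta1hat, near 2

Note the one-dimensional reduction: optimize searches only over theta2, and theta1hat is recovered afterwards in closed form, which is exactly the saving the user is describing.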