cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch
MIT License

Minor patch to Matern covariances #2378

Closed · j-wilson closed this 1 year ago

j-wilson commented 1 year ago

This PR is intended as a minor patch ensuring that MaternCovariance and MaternKernel each produce identical outputs in batched and non-batched settings. Currently, batched and non-batched calls return slightly different results due to how a shift is handled internally. Examples are given below.

Pre-PR:

import torch
from gpytorch.kernels import MaternKernel

kernel = MaternKernel(nu=2.5)
X = torch.rand(2, 4, 2)
kernel(X).to_dense()[0] - kernel(X[0]).to_dense()
> tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  1.1102e-16],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 1.1102e-16, -2.2204e-16,  0.0000e+00,  0.0000e+00]],
       grad_fn=<SubBackward0>)

Post-PR:

import torch
from gpytorch.kernels import MaternKernel

kernel = MaternKernel(nu=2.5)
X = torch.rand(2, 4, 2)
kernel(X).to_dense()[0] - kernel(X[0]).to_dense()
> tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]], grad_fn=<SubBackward0>)
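For intuition on why a shift can produce discrepancies at the scale of machine epsilon: subtracting a constant shift from both inputs cancels exactly in real arithmetic, but not in floating point, so computing the same pairwise difference under two different shifts can round differently. A minimal pure-Python sketch (an illustration of the rounding effect only, not gpytorch's actual code path):

```python
# Illustration only: applying a constant shift before taking a pairwise
# difference changes the rounded result, even though the shift cancels
# exactly in real arithmetic.
a, b = 0.1, 0.2
shift = 1.0

direct = a - b                       # -0.1 (happens to be exact in binary64)
shifted = (a + shift) - (b + shift)  # rounding in the additions leaks through

print(direct)             # -0.1
print(shifted)            # -0.09999999999999987
print(direct == shifted)  # False
```

The two results differ only at the level of one or two ulps, which matches the ~1e-16 entries in the pre-PR output above: if batched and non-batched code paths apply different shifts, the covariances agree only up to rounding rather than bitwise.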