cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch
MIT License

[Test Failure] test_fantasy_updates_cuda #489

Open KeAWang opened 5 years ago

KeAWang commented 5 years ago

The master branch currently fails this test, in addition to #483.

I'm using a recent PyTorch nightly:

>>> torch.__version__
'1.0.0.dev20190128'

The unittest failure:

FAIL: test_fantasy_updates_cuda (test.examples.test_simple_gp_regression.TestSimpleGPRegression)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../gpytorch/test/examples/test_simple_gp_regression.py", line 165, in test_fantasy_updates_cuda
    self.test_fantasy_updates(cuda=True)
  File ".../gpytorch/test/examples/test_simple_gp_regression.py", line 225, in test_fantasy_updates
    self.assertTrue(approx_equal(test_function_predictions.mean, fant_function_predictions.mean))
AssertionError: tensor(0, device='cuda:0', dtype=torch.uint8) is not true
KeAWang commented 5 years ago

However, it doesn't fail on PyTorch stable 1.0.0.

Balandat commented 5 years ago

Still failing.

neighthan commented 5 years ago

I think I might be running into this same issue. I have two GP models with the same structure and hyperparameters. For one model, I use set_train_data to give it all n data points. For the other, I give it the first n/2 points with set_train_data and then the second n/2 with get_fantasy_model. All checks indicate that the training inputs and targets are identical between the two GPs, yet their predictions differ. Oddly, if I call gp2.set_train_data(gp2.train_inputs[0], gp2.train_targets) (i.e. re-set the fantasy GP's train data to what it already is), the predictions come out the same. So this seems to be related to the updating of the test-time caches that happens in get_fantasy_model. A minimal sketch of this setup follows.
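
A minimal sketch of the comparison described above, assuming a standard single-output ExactGP with an RBF kernel; the model class, data, and tolerance here are illustrative placeholders rather than the poster's actual setup:

import torch
import gpytorch


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


torch.manual_seed(0)
n = 20
train_x = torch.linspace(0, 1, n)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(n)
test_x = torch.linspace(0, 1, 51)

likelihood = gpytorch.likelihoods.GaussianLikelihood()

# gp1: conditioned on all n points at once
gp1 = ExactGPModel(train_x, train_y, likelihood)
gp1.eval()
likelihood.eval()
with torch.no_grad():
    pred1 = likelihood(gp1(test_x))

# gp2: first n/2 points at construction, second n/2 via get_fantasy_model
gp2 = ExactGPModel(train_x[: n // 2], train_y[: n // 2], likelihood)
gp2.eval()
with torch.no_grad():
    likelihood(gp2(test_x))  # make a prediction first so the test-time caches exist
    gp2 = gp2.get_fantasy_model(train_x[n // 2:], train_y[n // 2:])
    pred2 = likelihood(gp2(test_x))

# On an unaffected install these should agree; in the setup described above they don't.
print(torch.allclose(pred1.mean, pred2.mean, atol=1e-4))

# Workaround observed above: re-setting the (identical) train data forces the
# caches to be rebuilt, after which the predictions match gp1 again.
gp2.set_train_data(gp2.train_inputs[0], gp2.train_targets, strict=False)
with torch.no_grad():
    pred3 = likelihood(gp2(test_x))
print(torch.allclose(pred1.mean, pred3.mean, atol=1e-4))

The fact that re-setting unchanged training data changes the output points at the cached prediction strategy carried over by get_fantasy_model, rather than at the training data itself.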