Taha-Bahadori opened this issue 4 years ago
I don't think it's obvious how to accomplish this in the exact GP setting where the loss doesn't factorize over data and the data is only accessed through kernel matrix entries. In the variational setting, I think this would be straightforward since at least the expected log likelihood term is directly a sum over data points.
All you'd have to do is extend VariationalELBO and override to replace the sum here with a weighted sum: https://github.com/cornellius-gp/gpytorch/blob/8f9b44fc57dbb0a13b568946f07a37e9332f92c4/gpytorch/mlls/variational_elbo.py#L61
Note that if you define a `weights=None` kwarg in your extension, you can call `mll(output, y_batch, weights=...)` to get them there.
This kind of weighting is somewhat unusual from a statistical perspective, though, as it corresponds to raising each likelihood term to a power. Maybe what you want is to add a `log(w_i)` to each log likelihood term instead?
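To make the distinction concrete, writing $p_i = p(y_i \mid f_i)$ for the $i$-th likelihood term:

$$\sum_i w_i \log p_i = \log \prod_i p_i^{\,w_i},$$

i.e. the weighted sum tempers each likelihood term by the power $w_i$, whereas

$$\sum_i \left(\log w_i + \log p_i\right) = \log \prod_i w_i\, p_i,$$

which scales each likelihood term by $w_i$ instead of exponentiating it.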
📚 Documentation/Examples
Is it possible to provide an example for the loss function with sample weights? I am trying to perform causal dose-response curve estimation using the IPW (Inverse Propensity score Weighting) approach. This is a classical use of GPs.