BayesWatch / deep-kernel-transfer

Official PyTorch implementation of the paper "Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels" (NeurIPS 2020)
https://arxiv.org/abs/1910.05199

Why there's a need to set_train_data() in DKT_regression.py while testing #16

Closed Hstellar closed 2 years ago

Hstellar commented 2 years ago

While running the test loop, ideally we won't have the target variable `y_support[n]`, so why do you write `self.model.set_train_data(inputs=z_support, targets=y_support[n], strict=False)` in line 84? Won't it cause target leakage (i.e. testing on the same data that is in the train set)?

[screenshot of the test loop in DKT_regression.py]

mpatacchiola commented 2 years ago

Hi @Hstellar

The use of the support set at test time is standard practice in the few-shot setting.

In few-shot learning, the model can access the support data (context) and use them to make the prediction on the query data. The performance of the model on the query set is used to evaluate the final accuracy.
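To make the protocol concrete, here is a minimal numpy sketch of one few-shot regression episode (not the repository's code, and the toy data and `lengthscale`/`noise` values are made up for illustration): the GP is conditioned on the labelled support set, predicts at the query inputs, and is scored only on the held-out query labels.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_support, y_support, x_query, noise=1e-4):
    # GP posterior mean: condition on the support (context) set and
    # predict at the query points -- no gradient-based training involved.
    K_ss = rbf_kernel(x_support, x_support) + noise * np.eye(len(x_support))
    K_qs = rbf_kernel(x_query, x_support)
    return K_qs @ np.linalg.solve(K_ss, y_support)

# One few-shot episode: support labels are available at test time,
# query labels are held out and used only for evaluation.
x_support = np.array([0.0, 1.0, 2.0])
y_support = np.sin(x_support)
x_query = np.array([0.5, 1.5])
y_query = np.sin(x_query)

pred = gp_predict(x_support, y_support, x_query)
mse = np.mean((pred - y_query) ** 2)  # score on the query set only
```

Because the support labels enter only through conditioning, using them at test time is not leakage: the query labels are never shown to the model.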

Hope this helps.

mpatacchiola commented 2 years ago

The method set_train_data() is a call to the underlying GPyTorch method, see the documentation here.

In practice this is not training the model, but just setting the base data used by the GP for inference.
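The distinction between "setting data" and "training" can be illustrated with a tiny stand-in class (a hypothetical sketch, not GPyTorch itself): swapping the conditioning set changes the predictions, while the kernel hyperparameters — the quantities that actual training would update — stay frozen.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

class TinyGP:
    """Minimal exact-GP stand-in. `set_train_data` mimics the semantics of
    GPyTorch's method: it swaps the data the GP conditions on, without
    touching the (fixed) hyperparameters that training would change."""

    def __init__(self, lengthscale=1.0, noise=1e-4):
        self.lengthscale = lengthscale  # "learned" hyperparameter, frozen here
        self.noise = noise
        self.x, self.y = None, None

    def set_train_data(self, inputs, targets):
        # No parameters are updated -- only the conditioning set is stored.
        self.x, self.y = inputs, targets

    def predict(self, x_query):
        K = rbf(self.x, self.x, self.lengthscale) + self.noise * np.eye(len(self.x))
        return rbf(x_query, self.x, self.lengthscale) @ np.linalg.solve(K, self.y)

gp = TinyGP()
gp.set_train_data(np.array([0.0, 1.0]), np.array([0.0, 1.0]))
p1 = gp.predict(np.array([0.25]))
# A new episode's support set: same model, different conditioning data.
gp.set_train_data(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
p2 = gp.predict(np.array([0.25]))
```

Here `p1` and `p2` differ even though no optimizer step ever ran, which is exactly what the call in the test loop relies on.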

Hstellar commented 2 years ago

Thank you for the quick reply. So if I understand correctly, in the test loop we should pass x_query instead of x_all[n] on line 91 when we want to evaluate test performance.

[screenshot of the modified test loop in DKT_regression.py]

mpatacchiola commented 2 years ago

Yes, if you are only interested in the model's predictions for the query points then you will need to use x_query.

Then you will have to compare the model's predictions against the true labels (y_query). The score on the query set is what matters for evaluation.
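The evaluation step described above can be sketched as follows (a hedged example: the helper name and the toy numbers are made up, and MSE is used as the regression score as in the paper's setting):

```python
import numpy as np

def evaluate_episode(pred_query, y_query):
    # Mean squared error on the query set -- the quantity reported
    # per episode for few-shot regression.
    return float(np.mean((np.asarray(pred_query) - np.asarray(y_query)) ** 2))

# Scores from two hypothetical episodes, averaged for the final figure.
scores = [evaluate_episode([0.48, 1.02], [0.5, 1.0]),
          evaluate_episode([0.10, 0.95], [0.0, 1.0])]
mean_score = float(np.mean(scores))
```

Averaging the per-episode query scores (often with a confidence interval) gives the number usually reported in few-shot benchmarks.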

Hstellar commented 2 years ago

Thank you so much! This solves my doubt.