Closed: Hstellar closed this issue 2 years ago.
Hi @Hstellar
The use of the support set at test time is standard practice in the few-shot setting.
In few-shot learning, the model has access to the support data (the context) and uses it to make predictions on the query data. The model's performance on the query set is what is used to compute the final accuracy.
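To make the support/query convention concrete, here is a minimal sketch of one few-shot episode. All names (`x_support`, `x_query`, the nearest-neighbour predictor) are hypothetical stand-ins, not the repo's actual model; the point is only that predictions condition on the support set, while the query labels are used solely for scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical few-shot episode: a small labeled support set (context)
# plus a query set that is only used to evaluate the prediction.
x_all = rng.uniform(-1.0, 1.0, size=20)
y_all = np.sin(3.0 * x_all)

# Split the episode: first 5 points are support, the rest are query.
x_support, y_support = x_all[:5], y_all[:5]
x_query, y_query = x_all[5:], y_all[5:]

def predict(xq):
    # Toy predictor that conditions only on the support set:
    # each query point copies the label of its nearest support point.
    idx = np.argmin(np.abs(x_support[None, :] - xq[:, None]), axis=1)
    return y_support[idx]

# y_query enters only here, to score the prediction -- never to predict.
mse = np.mean((predict(x_query) - y_query) ** 2)
```

The query score (`mse` here) is the number reported as few-shot accuracy; the support labels are legitimately visible to the model, since they define the task.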
Hope this helps.
The method set_train_data() is a call to the underlying GPyTorch method; see the documentation here.
In practice this does not train the model: it just sets the base data that the GP conditions on for inference.
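The distinction between "training" and "setting the base data" can be shown with plain NumPy: conditioning an exact GP on context points is a closed-form linear-algebra step with no gradient updates. This is only an illustrative analogy to what set_train_data() prepares in GPyTorch; the kernel, lengthscale, and data below are made up for the sketch.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    # Squared-exponential kernel with a fixed (untrained) lengthscale.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_context, y_context, x_test, noise=1e-3):
    # "Setting the train data" of an exact GP amounts to solving this
    # linear system -- no parameters are updated, nothing is optimized.
    K = rbf(x_context, x_context) + noise * np.eye(len(x_context))
    K_star = rbf(x_test, x_context)
    return K_star @ np.linalg.solve(K, y_context)

# Condition on a tiny support set, then predict at a query point.
x_support = np.array([-1.0, 0.0, 1.0])
y_support = np.sin(x_support)
x_query = np.array([0.5])
mean = gp_posterior_mean(x_support, y_support, x_query)
```

Swapping in a different support set just changes the data the posterior conditions on, which is exactly why calling set_train_data() per task is cheap at test time.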
Thank you for the quick reply. So if I understand correctly, in the test loop we should write x_query instead of x_all[n] when we want to evaluate test performance (line 91).
Yes, if you are only interested in the model's predictions for the query points then you will need to use x_query.
Then you compare the model's predictions against the true labels (y_query). The score on the query set is what matters for evaluation.
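The comparison step is just a metric over the query set. A minimal sketch, assuming a regression task and mean squared error as the score (the arrays below are placeholder values, and `y_pred` stands in for whatever the model returns on x_query):

```python
import numpy as np

# Hypothetical query-set evaluation: compare predictions to held-out labels.
y_query = np.array([0.2, -0.5, 1.0])   # true labels for the query points
y_pred = np.array([0.1, -0.4, 1.2])    # stand-in for model output on x_query

# Only this query-set score is reported as the episode's result.
mse = np.mean((y_pred - y_query) ** 2)
```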
Thank you so much! This solves my doubt.
While running testloop, ideally we won't have the target variable y_support[n], so why do you write self.model.set_train_data(inputs=z_support, targets=y_support[n], strict=False) on line 84? Will it not cause a target leak (i.e., testing on the same data that is in the train set)?