Closed tian0810 closed 4 years ago
Hi,
The contrastive loss acts as a kind of regularizer, and only during meta-training, where it makes the set-to-set transformation more discriminative. During meta-test, only the support-set instances are fed into the set-to-set transformation.
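To illustrate the distinction above, here is a minimal sketch (not the authors' code; all names such as `prototypes` are hypothetical, and a simple mean-embedding prototype stands in for the full set-to-set transformation): during meta-training the labels of both support and query instances are known, so prototypes for the contrastive regularizer can be built from both; at meta-test, prototypes come from the support set only.

```python
import numpy as np

def prototypes(embeddings, labels, n_classes):
    # Class prototype = mean embedding of all instances carrying that label.
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(n_classes)])

rng = np.random.default_rng(0)
n_classes, dim = 3, 4
support_x = rng.normal(size=(6, dim)); support_y = np.array([0, 0, 1, 1, 2, 2])
query_x   = rng.normal(size=(9, dim)); query_y   = np.array([0, 1, 2] * 3)

# Meta-training: query labels ARE known, so the contrastive regularizer may
# build prototypes from support + query jointly.
train_protos = prototypes(
    np.concatenate([support_x, query_x]),
    np.concatenate([support_y, query_y]),
    n_classes,
)

# Meta-test: query labels are unknown, so prototypes use the support set only.
test_protos = prototypes(support_x, support_y, n_classes)

# Classify each query by nearest prototype (negative squared distance as logit).
logits = -((query_x[:, None, :] - test_protos[None]) ** 2).sum(-1)
pred = logits.argmax(axis=1)
```

The key point is that the train-time use of query labels is legitimate because meta-training episodes are fully labeled; it never leaks into evaluation.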
Hello, I am very interested in your paper. However, when I try to understand the contrastive learning in your paper and the regularization term in your code, I get confused. You construct the prototypes using both the support and query sets during training, but I think we can only use the support set to construct the prototypes, because we don't know the labels of the query set. Could you clarify this for me? Thanks!