lygsbw opened this issue 3 years ago
Hi,
The 'low_order' and 'high_order' in 'sim_loss' represent multi-scale representations obtained with different pooling scales. Currently, we do not include the cosine similarity calculation in the repository; it can be implemented in a very simple way:
```python
import torch.nn.functional as F

def abs_cos(x, y):
    # mean absolute pairwise cosine similarity between the rows of x and y
    return (F.normalize(x) @ F.normalize(y).transpose(-2, -1)).abs().mean()
```
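For reference, here is a minimal sketch of how the multi-scale representations and `abs_cos` could be combined. The pooling kernel sizes, the function name `sim_loss_sketch`, and the way the terms are summed are assumptions for illustration only, not the exact 'sim_loss' in models.py:

```python
import torch
import torch.nn.functional as F

def abs_cos(x, y):
    # mean absolute pairwise cosine similarity between the rows of x and y
    return (F.normalize(x) @ F.normalize(y).transpose(-2, -1)).abs().mean()

def sim_loss_sketch(tokens, low_k=2, high_k=4):
    # tokens: (num_patches, dim) patch representations of a single image.
    # Average-pool along the patch dimension at two scales to obtain the
    # 'low_order' and 'high_order' views (kernel sizes here are illustrative).
    low_order = F.avg_pool1d(tokens.t().unsqueeze(0), kernel_size=low_k).squeeze(0).t()
    high_order = F.avg_pool1d(tokens.t().unsqueeze(0), kernel_size=high_k).squeeze(0).t()
    # Penalize cosine similarity between the patch tokens and each pooled view.
    return abs_cos(tokens, low_order) + abs_cos(tokens, high_order)

tokens = torch.randn(196, 384)  # e.g. 14x14 patch tokens with dim 384
loss = sim_loss_sketch(tokens)
```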
Dear Authors, thanks for your nice work. I also have a question about the similarity function. If I understand correctly, this function corresponds to the patch-wise contrastive loss described in your paper. However, the code only considers one positive pair against two negative pairs computed from the multi-scale representations with different pooling scales, which differs from the loss function described in the paper. My question is: is this because this kind of operation performs better in practice, or are there other reasons? Many thanks.
Dear Authors, as @xwan6266 correctly pointed out, the similarity loss function described in the paper differs from the one in your implementation. Is there a reason behind that? Could you please explain? @ChengyueGongR
Hello, I would like to know what 'low_order' and 'high_order' represent in the 'similarity' function in models.py, and how 'high_k' should be set.
I also wonder which part of the code implements the L_cos term from the paper.
Thank you very much!