Open JunLiangZ opened 1 year ago
How to solve this problem?
Hey, I've met the same problem. Did you solve it?
It seems to be an input-dimension mismatch in the forward call of `nn.CosineEmbeddingLoss()`.
In the file `trains/singleTask/DMD.py`, replace the code

```python
cosine_similarity_s_c_l = self.cosine(output['s_l'], output['c_l'], torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_v = self.cosine(output['s_v'], output['c_v'], torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_a = self.cosine(output['s_a'], output['c_a'], torch.tensor([-1]).cuda()).mean(0)
```

with

```python
cosine_similarity_s_c_l = self.cosine(output['s_l'].reshape(-1, 50), output['c_l'].reshape(-1, 50), torch.tensor([-1]).cuda())
cosine_similarity_s_c_v = self.cosine(output['s_v'].reshape(-1, 50), output['c_v'].reshape(-1, 50), torch.tensor([-1]).cuda())
cosine_similarity_s_c_a = self.cosine(output['s_a'].reshape(-1, 50), output['c_a'].reshape(-1, 50), torch.tensor([-1]).cuda())
```
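For context, here is a minimal standalone sketch of why the reshape helps. It is not the DMD code itself: `nn.CosineEmbeddingLoss` expects 2-D inputs of shape `(N, D)` plus a 1-D target of shape `(N)`, so 3-D feature tensors have to be flattened first. The `(seq_len=8, batch=4, feat=50)` shapes below are assumed stand-ins for the DMD feature tensors, not values from the repo.

```python
import torch
import torch.nn as nn

cosine = nn.CosineEmbeddingLoss()

seq_len, batch, feat = 8, 4, 50          # assumed shapes, for illustration only
s_l = torch.randn(seq_len, batch, feat)  # stand-in for output['s_l']
c_l = torch.randn(seq_len, batch, feat)  # stand-in for output['c_l']

# Flatten the 3-D tensors to (seq_len * batch, feat) so each row is one
# feature vector, matching the (N, D) shape the loss expects.
x1 = s_l.reshape(-1, feat)
x2 = c_l.reshape(-1, feat)
target = -torch.ones(x1.size(0))  # target -1: push each pair apart

loss = cosine(x1, x2, target)     # already reduced to a scalar by default
```

Because the loss reduces to a scalar on its own (`reduction='mean'` by default), the trailing `.mean(0)` from the original code is no longer needed after the reshape.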
Hey, is your solution computing cosine similarity on word embeddings? If my code looks like this, is the cosine similarity computed at the sentence level?

```python
cosine_similarity_s_c_l = self.cosine(output['s_l'].permute(1,0,2).contiguous().view(output['s_l'].size(1), -1),
                                      output['c_l'].permute(1,0,2).contiguous().view(output['c_l'].size(1), -1),
                                      torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_v = self.cosine(output['s_v'].permute(1,0,2).contiguous().view(output['s_l'].size(1), -1),
                                      output['c_v'].permute(1,0,2).contiguous().view(output['c_l'].size(1), -1),
                                      torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_a = self.cosine(output['s_a'].permute(1,0,2).contiguous().view(output['s_l'].size(1), -1),
                                      output['c_a'].permute(1,0,2).contiguous().view(output['c_l'].size(1), -1),
                                      torch.tensor([-1]).cuda()).mean(0)
```
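A quick shape check may help answer that question. Assuming the DMD tensors are laid out as `(seq_len, batch, feature_dim)` (an assumption, since the thread never states the layout), the `permute(1,0,2)` + `view(batch, -1)` pattern collapses each sample's whole sequence into one long row vector, so the cosine similarity would indeed be per sentence rather than per token:

```python
import torch

seq_len, batch, feat = 8, 4, 50          # hypothetical shapes for illustration
s_l = torch.randn(seq_len, batch, feat)  # stand-in for output['s_l']

# Move batch to the front, then flatten each sample's sequence into one row:
# (seq_len, batch, feat) -> (batch, seq_len, feat) -> (batch, seq_len * feat)
flat = s_l.permute(1, 0, 2).contiguous().view(s_l.size(1), -1)
```

With one row per sample, `nn.CosineEmbeddingLoss` then compares one vector per sentence.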
In the file `trains/singleTask/DMD.py`, I replaced the code

```python
cosine_similarity_s_c_l = self.cosine(output['s_l'], output['c_l'], torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_v = self.cosine(output['s_v'], output['c_v'], torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_a = self.cosine(output['s_a'], output['c_a'], torch.tensor([-1]).cuda()).mean(0)
```

with

```python
cosine_similarity_s_c_l = self.cosine(output['s_l'].contiguous().view(labels.size(0),-1), output['c_l'].contiguous().view(labels.size(0),-1), torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_v = self.cosine(output['s_v'].contiguous().view(labels.size(0),-1), output['c_v'].contiguous().view(labels.size(0),-1), torch.tensor([-1]).cuda()).mean(0)
cosine_similarity_s_c_a = self.cosine(output['s_a'].contiguous().view(labels.size(0),-1), output['c_a'].contiguous().view(labels.size(0),-1), torch.tensor([-1]).cuda()).mean(0)
```
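One caveat worth noting about this variant: a straight `view(labels.size(0), -1)` only yields one row per sample if the tensor is already batch-first. If the layout is `(seq_len, batch, feat)` (again an assumption, since the thread never states it), the straight flatten produces rows that mix elements from different samples, unlike the `permute`-first version:

```python
import torch

seq_len, batch, feat = 8, 4, 50
# Deterministic values so the two flattens can be compared element-wise.
x = torch.arange(seq_len * batch * feat, dtype=torch.float32).view(seq_len, batch, feat)

a = x.contiguous().view(batch, -1)                   # straight flatten
b = x.permute(1, 0, 2).contiguous().view(batch, -1)  # batch-first flatten

same = torch.equal(a, b)  # False: rows of `a` interleave different samples
```

So the two fixes in this thread are not equivalent unless the tensor is batch-first to begin with.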
Hello, I encountered the same issue. I found the author's response in an old issue. It might be helpful to check the link below.
https://github.com/mdswyz/DMD/issues/5#issuecomment-1633787519