Open mujizi opened 5 years ago
I have the same issue; has anyone found a solution?
My current results: Epoch: 150 | Loss: 2366.799846 | Mention recall: 0.729597 | Coref recall: 0.673229 | Coref precision: 0.403507
Hi, did you make any modifications to the training code to get these results?
Yes. First change: please check my last reply on Jun 22 in #18.
Thank you! So I changed the loss according to your suggestion, using:
loss = torch.sum(torch.log(torch.sum(torch.mul(probs, gold_indexes), dim=1).clamp_(eps, 1-eps)), dim=0) * -1
But the model still does not converge. Did you make any other changes besides that?
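For anyone else trying this, here is a minimal, self-contained sketch of what that loss line computes, with dummy tensors (the shapes and the meaning of probs/gold_indexes are my assumptions from the thread, not from the repo): it sums the probability mass each mention assigns to its gold antecedents, clamps it away from 0 and 1 so log() stays finite, and takes the negative log-likelihood summed over mentions.

```python
import torch

# Assumed shapes: (num_mentions, num_candidates).
# gold_indexes is a 0/1 mask marking gold antecedents; eps keeps log() finite.
eps = 1e-7
probs = torch.softmax(torch.randn(4, 6), dim=1)  # dummy antecedent distributions
gold_indexes = torch.zeros(4, 6)
gold_indexes[torch.arange(4), torch.tensor([0, 2, 1, 5])] = 1.0

# Probability mass on gold antecedents per mention, clamped, then NLL over mentions.
gold_prob = torch.sum(probs * gold_indexes, dim=1).clamp(eps, 1 - eps)
loss = -torch.sum(torch.log(gold_prob))
```

Note that clamping only guards against log(0); if the loss value is finite but training still diverges, the problem is likely elsewhere (learning rate, mention pruning, etc.).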
I also changed one line in the coref.py file, as suggested in issue #10, to handle an index-out-of-range issue: in def train_epoch(self, epoch), I added self.train_corpus = [doc for doc in self.train_corpus if doc.sents]
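In case it helps others, here is a tiny sketch of what that guard does (the Doc class is a stand-in for whatever the repo actually uses): documents with no sentences are dropped before training, so later indexing into doc.sents cannot go out of range.

```python
# Stand-in for the repo's document type; only the .sents attribute matters here.
class Doc:
    def __init__(self, sents):
        self.sents = sents

train_corpus = [Doc([]), Doc(["A sentence."]), Doc([])]

# The guard from issue #10: keep only documents that actually have sentences.
train_corpus = [doc for doc in train_corpus if doc.sents]
```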
Well, I made these modifications and finished training and evaluation, but my results are poor:
Epoch: 150 | Loss: 2832.548317 | Mention recall: 0.067340 | Coref recall: 0.024316 | Coref precision: 0.020000
So did you solve it?
Epoch: 150 | Loss: 2649.815816 | Mention recall: 0.054297 | Coref recall: 0.003106 | Coref precision: 0.000000