Closed huntzhan closed 5 years ago
So...does that mean I need to manually replace this line?
Yes, just replace `nn.CrossEntropyLoss` with `CrossEntropyLoss_LSR`.
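For reference, label smoothing regularization replaces the one-hot target with a softened distribution before taking the cross entropy. A minimal sketch of such a loss is below; the class name, `smoothing` default, and internals here are illustrative assumptions, and the repo's actual `CrossEntropyLoss_LSR` may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossEntropyLossLSR(nn.Module):
    """Cross entropy with label smoothing (a sketch, not the repo's
    exact CrossEntropyLoss_LSR implementation)."""

    def __init__(self, n_classes, smoothing=0.2):
        super().__init__()
        self.n_classes = n_classes
        self.smoothing = smoothing

    def forward(self, logits, target):
        log_probs = F.log_softmax(logits, dim=-1)
        with torch.no_grad():
            # every class gets eps / K probability mass ...
            true_dist = torch.full_like(log_probs, self.smoothing / self.n_classes)
            # ... and the gold class additionally gets (1 - eps)
            true_dist.scatter_(
                1, target.unsqueeze(1),
                1.0 - self.smoothing + self.smoothing / self.n_classes)
        # expected negative log-likelihood under the smoothed targets
        return torch.mean(torch.sum(-true_dist * log_probs, dim=-1))
```

In `train.py` the swap would then look like `criterion = CrossEntropyLossLSR(opt.polarities_dim)` in place of `criterion = nn.CrossEntropyLoss()` (attribute names assumed from the repo's option style).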
I think it would be great if you could provide the actual commands (random seeds, hyperparameters, etc.) for experiments.
Among the models implemented, only some come with a complete set of hyperparameters; the others do not. This repo mainly focuses on implementing all these models under a unified code structure, which may sacrifice some fidelity in detail. Usually, the default parameters give results close to those reported in the papers.
Because the public ABSA datasets are very small and there is no official split into a devset and testset, the experimental results may fluctuate within a certain range. #32 provides a more appropriate way of comparison.
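Given that fluctuation, one common remedy (not necessarily the protocol #32 proposes) is to repeat each experiment with several random seeds and report the mean and standard deviation rather than a single run. A minimal sketch:

```python
import statistics

def summarize_runs(accuracies):
    """Aggregate repeated runs (different seeds) into mean and std,
    so small-dataset fluctuation is visible in the reported number."""
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies) if len(accuracies) > 1 else 0.0
    return mean, std

# hypothetical accuracies from three runs of the same model
m, s = summarize_runs([0.792, 0.781, 0.788])
print(f"acc = {m:.3f} +/- {s:.3f}")
```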
Thank you.
Hi,
Thanks for the great work. I just noticed that
`CrossEntropyLoss_LSR`
is not referenced at all. So... does that mean I need to manually replace this line? https://github.com/songyouwei/ABSA-PyTorch/blob/e4da01e1fa1fa57b5bace4cc209a45a3a73521ab/train.py#L152

BTW, I noticed some reproducibility issues, as mentioned in #38. I think it would be great if you could provide the actual commands (random seeds, hyperparameters, etc.) for the experiments.