Closed RandomTuringDuck closed 4 years ago
- These are the results on the testing set.
- We remove all duplicated keyphrases before computing the MAE and the average number of keyphrases; these are also results on the testing set.
- The results of Exclusive Hierarchical Decoding for Deep Keyphrase Generation are also on the testing set, but Wang Chen may have used a different way to preprocess the data in that paper. @Chen-Wang-CUHK , can you help to explain this question? Thanks.
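A minimal sketch of the deduplication step described above, before computing the MAE and the average number of keyphrases. The function names (`dedup`, `mae_and_avg`) and the lowercase normalization are illustrative assumptions, not the repository's actual evaluation code:

```python
# Sketch: deduplicate predicted keyphrases, then compute
# MAE between predicted and gold keyphrase counts, plus the
# average number of predicted keyphrases per document.
# All names here are illustrative, not the repo's real code.

def dedup(keyphrases):
    """Remove duplicate keyphrases while preserving order."""
    seen = set()
    out = []
    for kp in keyphrases:
        norm = kp.strip().lower()  # assumed normalization
        if norm not in seen:
            seen.add(norm)
            out.append(kp)
    return out

def mae_and_avg(predictions, targets):
    """MAE over per-document keyphrase counts, and avg # predicted."""
    pred_counts = [len(dedup(p)) for p in predictions]
    gold_counts = [len(dedup(t)) for t in targets]
    n = len(pred_counts)
    mae = sum(abs(p - g) for p, g in zip(pred_counts, gold_counts)) / n
    avg = sum(pred_counts) / n
    return mae, avg

preds = [["neural networks", "Neural Networks", "keyphrase generation"],
         ["topic model"]]
golds = [["neural networks", "keyphrase generation", "seq2seq"],
         ["topic model", "lda"]]
print(mae_and_avg(preds, golds))  # -> (1.0, 1.5)
```

Note that without the dedup step the first prediction would count as 3 keyphrases instead of 2, inflating the average and shifting the MAE.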
In the ACL 2019 paper, we do not stem the source input and keyphrases when we compute the number of present and absent keyphrases. But in the ACL 2020 paper, we stem them before we compute the statistics.
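To illustrate why this preprocessing choice changes the present/absent statistics, here is a toy sketch. `naive_stem` is a stand-in for the Porter-style stemmer typically used in this line of work; the actual preprocessing in either paper may differ:

```python
# Sketch: a keyphrase is "present" if its token sequence appears
# contiguously in the source text. Stemming both sides first can
# flip a keyphrase from absent to present.
# naive_stem is a toy stand-in for a real stemmer (assumption).

def naive_stem(word):
    for suf in ("ing", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def is_present(keyphrase, source_tokens, stem=False):
    kp = keyphrase.lower().split()
    src = [t.lower() for t in source_tokens]
    if stem:
        kp = [naive_stem(t) for t in kp]
        src = [naive_stem(t) for t in src]
    n = len(kp)
    return any(src[i:i + n] == kp for i in range(len(src) - n + 1))

source = "we train recurrent networks for generating keyphrases".split()
# Unstemmed, "recurrent network" does not match "recurrent networks":
print(is_present("recurrent network", source, stem=False))  # False
# After stemming both sides, it matches and counts as present:
print(is_present("recurrent network", source, stem=True))   # True
```

So the same model outputs can yield different numbers of present and absent keyphrases between the two papers purely because of the stemming step.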
Thank you very much for your explanation.
I see, that explains it. I understand now, thank you very much.
Regarding the experimental results on the number of keyphrases generated by the model in your paper, there are some points I don't quite understand and would like to ask about.
I hope to get your advice.