Closed jharrang closed 2 years ago
hi @jharrang
Sorry for the late reply.
Honestly, it has been a while and I don't remember the exact details. But I am sure that the code in this repo is the code we used to get the reported results. Historically, the data pipeline is taken from this repo (https://github.com/codeKgu/Text-GCN).
And sorry, I am not sure why we got better results for their method ...
For your 20NG results reported in your original paper (88.5 ± 0.1), was the model trained on the full public 20NG train set, of size 11314, or were the reported results generated using the code currently in this repo, which appears to exclude the validation set from the data used to train the model? (The latter would use a training set of size 10183.)

Also, I notice that in Table 4 you reported a different result (87.9 ± 0.2) for the cited Yao GCN paper than they reported themselves (0.8634 ± 0.0009). Do you know why that is the case?

Thanks!
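For reference, the train-set arithmetic in the question can be sketched as follows. This is a minimal illustration only; the 10% validation fraction is an assumption inferred from the two sizes quoted above (11314 and 10183), not something confirmed by the repo:

```python
# Sketch of the 20NG split sizes discussed above.
# Assumption (hypothetical): the repo holds out ~10% of the public
# 20NG train split as a validation set before training.

FULL_TRAIN = 11314            # size of the public 20NG train split
val_size = FULL_TRAIN // 10   # assumed 10% hold-out -> 1131 docs
train_size = FULL_TRAIN - val_size

print(val_size, train_size)   # 1131 10183
```

If the reported 88.5 ± 0.1 was produced by this code path, the model would have seen 10183 training documents rather than the full 11314.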