Stomach-ache opened this issue 4 years ago
update:
epoch = 500 batch_size = 64 P@1 ~ 72%
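For context, the P@1 figure above is precision at 1. A minimal NumPy sketch of P@k as it is typically computed in extreme multi-label classification (the function name and toy data here are mine, for illustration only):

```python
import numpy as np

def precision_at_k(scores, targets, k):
    """P@k: fraction of the top-k predicted labels that are relevant,
    averaged over samples. scores, targets: (n_samples, n_labels)."""
    topk = np.argsort(-scores, axis=1)[:, :k]          # indices of top-k scores
    hits = np.take_along_axis(targets, topk, axis=1)   # 1 where label is relevant
    return hits.sum(axis=1).mean() / k

scores = np.array([[0.9, 0.8, 0.1], [0.2, 0.7, 0.6]])
y      = np.array([[1,   0,   0  ], [0,   1,   1  ]])
print(precision_at_k(scores, y, 1))  # 1.0: both top-1 predictions are relevant
print(precision_at_k(scores, y, 2))  # 0.75: sample 1 gets 1/2, sample 2 gets 2/2
```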
Sorry for the late reply, and thanks for your update. I still need to make some modifications to exactly reproduce the results reported in the paper.
Thanks, Siddhant
Hi Siddhant,
I’ve been working on this for months lol... Currently, P@k can be reproduced, but PSP@k is lower than the numbers in the paper by a large margin on the EUR-Lex dataset.
Best, Tong
Hi Tong,
I am not sure what might be going wrong; I did not test PSP@k myself. Try looking at the supplementary material of the paper for the PSP metric definition and verify it against your implementation. Also check whether the Glas regularizer is causing a noticeable difference in the results.
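I'm not certain of the exact form used in the paper, but my reading of the GLaS regularizer (Guo et al., "Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces", NeurIPS 2019) is roughly the sketch below: push the Gram matrix of the label embeddings toward a normalized label co-occurrence matrix. The function name `glas_penalty` and the variable names are mine; please verify the normalization against the paper before relying on it.

```python
import numpy as np

def glas_penalty(V, Y, lam=1.0):
    """Sketch of a GLaS-style regularizer (my reading of Guo et al. 2019;
    verify against the paper). V: (n_labels, dim) label embeddings,
    Y: (n_samples, n_labels) binary label matrix. Assumes every label
    occurs at least once in Y (so the diagonal of A is nonzero)."""
    A = Y.T @ Y                                   # label co-occurrence counts
    d = np.diag(A).astype(float)
    S = 0.5 * (A / d[:, None] + A / d[None, :])   # symmetrically normalized target
    return lam * np.linalg.norm(V @ V.T - S, "fro") ** 2
```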
Thanks, Siddhant
Hi Siddhant,
Thanks for your suggestions; I have tried them out as well.
- In terms of their implementation of PSP@k, I think it differs from the one in AttentionXML, which I am using to reproduce the results;
- For the effect of the Glas regularizer, I have tried different values of the hyperparameter \lambda, e.g., 0, 10, 100. Large values of \lambda do not boost performance and typically hurt it.
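Since the two PSP@k implementations seem to differ, a reference sketch may help. The formula below follows the propensity model of Jain et al. (2016), which is, to my knowledge, what the AttentionXML evaluation also builds on; the function names are mine, and I show the unnormalized variant (reported numbers usually divide by the same quantity for the best possible ranking, so check which variant the paper uses):

```python
import numpy as np

def propensities(y_train, A=0.55, B=1.5):
    """Empirical label propensities per Jain et al. (2016).
    A, B are dataset-dependent; 0.55 / 1.5 are the suggested defaults."""
    n = y_train.shape[0]
    freqs = y_train.sum(axis=0)                  # per-label positive counts
    C = (np.log(n) - 1) * (B + 1) ** A
    return 1.0 / (1.0 + C * np.exp(-A * np.log(freqs + B)))

def psp_at_k(scores, targets, p, k):
    """PSP@k: like P@k, but each hit is up-weighted by 1/p_l, so rare
    (low-propensity) labels count more. Unnormalized variant."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(targets, topk, axis=1)
    inv_p = np.take_along_axis(np.broadcast_to(1.0 / p, targets.shape), topk, axis=1)
    return (hits * inv_p).sum(axis=1).mean() / k
```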
Hi Tong, The paper doesn't describe the implementation in detail, which makes it hard to replicate the results. For instance, they never explain how they perform the sampling within a batch so that the label matrix Y doesn't become low-rank. I will push a report to the repo which might help you with some of the implementation details.
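One quick way to check whether the in-batch Y is degenerating, regardless of how the paper samples: draw random batches and look at the rank of each batch's label submatrix. This diagnostic (the function name and sampling scheme are mine, not the paper's) just flags the problem; it doesn't reproduce their sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_label_rank(Y, batch_size, n_batches=100):
    """Diagnostic: sample random batches and report the rank of each batch's
    label submatrix (rows = sampled examples, columns restricted to labels
    present in the batch). Ranks far below min(batch_size, active labels)
    indicate the in-batch Y is (near) low-rank."""
    ranks = []
    for _ in range(n_batches):
        idx = rng.choice(Y.shape[0], size=batch_size, replace=False)
        Yb = Y[idx]
        Yb = Yb[:, Yb.sum(axis=0) > 0]           # drop labels absent from the batch
        ranks.append(int(np.linalg.matrix_rank(Yb)))
    return ranks
```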
Thanks, Siddhant
I ran your code on the Eurlex-4K dataset using the default settings and got around 10% P@1....