Hi Guys,
Weirdly, I forgot the activation function (ReLU) in the former implementation; this has now been fixed in https://github.com/pmixer/TiSASRec.pytorch/commit/c4e6230b2568fb5099b60eeccdc2723734392c60. As we know, without activation functions the whole network can collapse into one big matrix multiplication. But when I double-checked the performance with vs. without ReLU, I found that, with the same hyper-parameter settings and just a few more training epochs, the model performs better without ReLU (even better than the paper's reported NDCG@10: 0.5701, HR@10: 0.8083). When I put the ReLU back, I can only get:
So please feel free to revert the activation-function fix (i.e. drop the ReLU again) if you need the better numbers.
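For reference, the layer in question is the point-wise feed-forward block. Below is a minimal sketch in the SASRec/TiSASRec style; the class name, the `use_relu` flag, and the exact `Conv1d` formulation are my own illustration and may not match the repo's code line for line. With `use_relu=False` the two 1x1 convolutions compose into a single affine map, which is the variant that scored higher above:

```python
import torch
import torch.nn as nn

class PointWiseFeedForward(nn.Module):
    """Point-wise feed-forward block, SASRec/TiSASRec style (illustrative sketch).

    With use_relu=False, the two Conv1d layers compose into one affine map,
    i.e. the "forgot the ReLU" variant discussed above.
    """
    def __init__(self, hidden_units, dropout_rate, use_relu=True):
        super().__init__()
        self.conv1 = nn.Conv1d(hidden_units, hidden_units, kernel_size=1)
        self.dropout1 = nn.Dropout(dropout_rate)
        # Toggle the nonlinearity to compare the two variants.
        self.act = nn.ReLU() if use_relu else nn.Identity()
        self.conv2 = nn.Conv1d(hidden_units, hidden_units, kernel_size=1)
        self.dropout2 = nn.Dropout(dropout_rate)

    def forward(self, inputs):
        # Conv1d expects (batch, channels, seq_len), so transpose in and out.
        x = inputs.transpose(-1, -2)
        x = self.dropout2(self.conv2(self.act(self.dropout1(self.conv1(x)))))
        x = x.transpose(-1, -2)
        return x + inputs  # residual connection
```

Swapping `use_relu` lets you A/B the two variants without touching anything else in the model.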
Moreover, replacing Adam with AdamW as the optimizer would also help you train the model a bit faster (earlier convergence): https://github.com/pmixer/TiSASRec.pytorch/blob/e87342ead6e90898234432f7d9b86e76695008bc/main.py#L76

Last point: as negative sampling introduces randomness into both the training and testing phases, and I did not fix the random seeds, your results are expected to differ slightly in your own experiments.
Regards, Zan