DavidZWZ / LightGODE

[CIKM 2024] Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation
https://arxiv.org/abs/2407.18910
MIT License

training loss #1

Closed ZeroerWiser closed 1 month ago

ZeroerWiser commented 1 month ago

The log shows that the model's training loss is negative in every epoch.

DavidZWZ commented 1 month ago

Hi,

Thank you for your interest in LightGODE! During training we optimize the DirectAU loss [1]. The uniformity component of this loss can become negative, especially when the users and items in a batch are far apart in the embedding space. As long as the overall loss keeps decreasing during training, this behavior is expected and does not affect the effectiveness of LightGODE.
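For reference, here is a minimal sketch of the DirectAU objective following the formulation in [1]; names and the weight `gamma` are illustrative, and the actual loss code in this repo may differ. Since `exp(-t * d^2) <= 1` for every pair of embeddings, the uniformity term is always non-positive, and it grows more negative as embeddings spread apart, so the total loss can dip below zero once alignment is small.

```python
import torch
import torch.nn.functional as F

def alignment(x, y):
    # Alignment: mean squared distance between normalized
    # embeddings of positive user-item pairs (smaller is better).
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    return (x - y).norm(p=2, dim=1).pow(2).mean()

def uniformity(x, t=2):
    # Uniformity: log of the average pairwise Gaussian potential.
    # Each exp(-t * d^2) term is <= 1, so the log is <= 0 --
    # this is the component that makes the total loss negative.
    x = F.normalize(x, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

def directau_loss(user_emb, item_emb, gamma=1.0):
    # Total loss = alignment + gamma * uniformity; `gamma` is a
    # hypothetical weight name used here for illustration only.
    align = alignment(user_emb, item_emb)
    uniform = (uniformity(user_emb) + uniformity(item_emb)) / 2
    return align + gamma * uniform
```

With well-spread embeddings the uniformity term can dominate the (small) alignment term, which is why a negative total loss is not a sign of a bug as long as the loss keeps decreasing.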

[1] C. Wang, Y. Yu, W. Ma, et al. Towards Representation Alignment and Uniformity in Collaborative Filtering. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 1816-1825.