sunfanyunn / InfoGraph

Official code for "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization" (ICLR 2020, spotlight)
https://openreview.net/forum?id=r1lfF2NYvH

About negative loss #9

Open zhikaili opened 3 years ago

zhikaili commented 3 years ago

Hi,

I find that as training progresses (beyond 20 epochs), the loss gradually becomes negative. May I ask whether this is harmful to downstream tasks?

Thank you!
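For context, if I understand correctly, the loss here is the JSD-style local-global mutual information term from the paper. A minimal self-contained sketch of that kind of estimator (the function names and tensor shapes below are only illustrative, not this repository's exact code) already suggests the value is not constrained to stay positive:

```python
import math
import torch
import torch.nn.functional as F

# Illustrative JSD-style expectations (Deep InfoMax / InfoGraph-like objective);
# names and shapes are assumptions, not this repository's actual code.
def positive_expectation(scores):
    # E_P[log 2 - softplus(-T)]: upper-bounded by log 2
    return (math.log(2.0) - F.softplus(-scores)).mean()

def negative_expectation(scores):
    # E_Q[softplus(-T) + T - log 2] = E_Q[softplus(T) - log 2]: lower-bounded by -log 2
    return (F.softplus(-scores) + scores - math.log(2.0)).mean()

# Random discriminator scores for positive (same-graph) and
# negative (cross-graph) node-graph pairs.
pos_scores = torch.randn(128) + 3.0   # well-separated positives
neg_scores = torch.randn(128) - 3.0   # well-separated negatives

loss = negative_expectation(neg_scores) - positive_expectation(pos_scores)
print(loss.item())  # close to -2 * log(2) ≈ -1.386 when pairs are well separated
```

If I read this correctly, each term is bounded (the positive expectation by log 2 from above, the negative one by -log 2 from below), so the combined loss can in principle go down to about -2·log 2 ≈ -1.39. I am just not sure whether a negative value is simply this constant offset of the estimator, or a sign of something that hurts downstream tasks.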

carrotYQ commented 2 years ago

Me too! I would also like to know the reason, and whether it is harmful to downstream tasks.

damengdameng commented 1 year ago

> Hi,
>
> I find that as training progresses (beyond 20 epochs), the loss gradually becomes negative. May I ask whether this is harmful to downstream tasks?
>
> Thank you!

I suspect this may be caused by structurally identical graphs appearing in the same batch, but I don't have time to verify this at the moment. If anyone does, please let me know the results. Thanks.
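If anyone wants to try, one rough way to check this hypothesis would be to hash every graph in a batch with a Weisfeiler-Lehman graph hash and count collisions. A networkx-based sketch (the batch below is only a placeholder and the function name is mine, not from this repo):

```python
import networkx as nx

def count_structural_duplicates(graphs, iterations=3):
    """Number of graphs in the batch whose WL hash repeats an earlier graph's,
    i.e. graphs that are very likely structurally identical and would blur
    the positive/negative split."""
    hashes = [nx.weisfeiler_lehman_graph_hash(g, iterations=iterations)
              for g in graphs]
    return len(hashes) - len(set(hashes))

# Placeholder batch: two isomorphic triangles plus one path graph.
batch = [nx.cycle_graph(3), nx.cycle_graph(3), nx.path_graph(4)]
print(count_structural_duplicates(batch))  # -> 1
```

A consistently positive count on the real training batches would at least be compatible with the hypothesis, though it would not by itself prove that this is what pushes the loss negative.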