Closed: EJHyun closed this issue 2 years ago.
Yes they are the same. Maximizing the negative of the loss is equivalent to minimizing the loss.
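To make that equivalence concrete, here is a small numeric sketch (plain NumPy, not the TeMP code): a log-likelihood objective written as a maximization in a paper and its positive negative-log-likelihood counterpart minimized in code are optimized by exactly the same parameter value.

```python
import numpy as np

# Toy example: likelihood of a Bernoulli parameter p for one positive
# observation. The paper-style objective is the log-likelihood (to be
# maximized, values <= 0); the code-style loss is its negative (to be
# minimized, values >= 0). Both pick the same p.
ps = np.linspace(0.01, 0.99, 99)

log_likelihood = np.log(ps)   # maximize this (negative values)
nll_loss = -np.log(ps)        # minimize this (positive values)

best_p_max = ps[np.argmax(log_likelihood)]
best_p_min = ps[np.argmin(nll_loss)]

assert best_p_max == best_p_min  # identical optimum
```

So seeing positive printed loss values is expected: the code minimizes the negated objective, which flips the sign but not the optimum.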
On Sep 14, 2021, at 10:05 AM, Noc_Up wrote:
Hi, I'm curious about your loss. In the paper, the loss has the form [loss equation image from the paper, not reproduced here], but when I printed the loss in TKG_Module.py:

    def training_step(self, batch_time, batch_idx):
        gc.collect()
        loss = self.forward(batch_time)
        print("\n", loss.item())

the values are all positive. Is the loss your model uses for training exactly the same as the one in the paper?
(I'm just a student and simply curious, so please don't take the question the wrong way.)