ZhouYiiFeng opened this issue 3 years ago
It is simply because there is no need to calculate the Jacobian here. Calculating the Jacobian is what gives the exact log-likelihood; however, in our task we do not care about the log-likelihood of z, so setting it to False speeds up training.
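To make this concrete, here is a minimal sketch of a RealNVP-style affine coupling layer with an optional `cal_jacobian` flag. This is an illustrative toy in NumPy, not InvDN's actual code; the function and weight names are hypothetical. It shows why the flag is cheap to skip: for a coupling layer, the log-determinant is just an extra reduction over the predicted log-scales, computed only when exact log-likelihood training is needed.

```python
import numpy as np

def affine_coupling_forward(x, W1, b1, W2, b2, cal_jacobian=False):
    """Toy affine coupling forward pass (RealNVP-style), illustrative only.

    The first half of the input passes through unchanged; a small MLP on
    that half predicts a per-dimension log-scale and shift applied to the
    second half, so the transform is trivially invertible.
    """
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    h = np.maximum(x1 @ W1 + b1, 0.0)      # small ReLU MLP on first half
    out = h @ W2 + b2
    log_s, t = out[:, :d], out[:, d:]      # predicted log-scale and shift
    y2 = x2 * np.exp(log_s) + t            # affine transform of second half
    y = np.concatenate([x1, y2], axis=1)
    if cal_jacobian:
        # log|det J| of an affine coupling is just the sum of log-scales.
        # Needed only for exact log-likelihood training; skipping it avoids
        # this extra computation.
        return y, log_s.sum(axis=1)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W1, b1 = 0.1 * rng.standard_normal((4, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((16, 8)), np.zeros(8)
y, logdet = affine_coupling_forward(x, W1, b1, W2, b2, cal_jacobian=True)
print(y.shape, logdet.shape)  # (4, 8) (4,)
```

Note that the first half of `y` equals the first half of `x`, which is exactly what makes the layer invertible without ever materializing a full Jacobian matrix.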
Thanks for your reply. I still have some other questions; could you add me on WeChat? My WeChat ID is: wx_joeyf
Sure.
Hi,
I noticed that this model is based on a flow-based generative model; however, the loss does not include the log-determinant (logdet) term, and `cal_jacobian` defaults to False. Is this because you replace some features of the forward output with samples from a normal distribution? I'm new to flow-based generative models, so maybe I've misunderstood. Could you help me? Thanks!