Lavender105 / DFF

Code for Dynamic Feature Fusion for Semantic Edge Detection https://arxiv.org/abs/1902.09104
MIT License
220 stars 51 forks

Question about your loss function #9

Open lijing1996 opened 4 years ago

lijing1996 commented 4 years ago

Since the input of your loss function is the network output without a sigmoid applied, I find it difficult to understand your code. What are `max_val` and `log_weight` here? Is this different from the loss function in CASENet? Could you give me a brief explanation? Thanks a lot.

https://github.com/Lavender105/DFF/blob/b215d3c960ca6fcea7adffabfb34b60c3e5190bc/exps/losses/customize.py#L16-L43

BrandonHanx commented 3 years ago

Hi, @lijing1996
Any idea about this? I have the same question. I find this loss function hard to understand...

BrandonHanx commented 3 years ago

Hi, @lijing1996 I guess this code snippet was adapted from `torch.nn.functional.binary_cross_entropy_with_logits`, which takes advantage of the log-sum-exp trick for numerical stability.
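To make that concrete, here is a minimal plain-Python sketch of the log-sum-exp rewrite that `binary_cross_entropy_with_logits` uses. The names `max_val` and `log_weight` are chosen to mirror the ones in `customize.py`, but this is only an illustration of the general trick applied to a single logit, not the repo's exact code (which also handles per-class reweighting and masking over tensors):

```python
import math

def bce_with_logits_stable(x, t, log_weight=1.0):
    """Numerically stable binary cross-entropy on a raw logit x.

    Naive form:   -t*log(sigmoid(x)) - (1-t)*log(1 - sigmoid(x))
    This simplifies to (1-t)*x + log(1 + exp(-x)), which overflows
    for large negative x. The log-sum-exp trick factors out
    max_val = max(-x, 0) so exp() never sees a positive argument:

        log(1 + exp(-x)) = max_val + log(exp(-max_val) + exp(-x - max_val))

    log_weight scales the log term, as in PyTorch's pos_weight handling
    (there, log_weight = 1 + (pos_weight - 1) * t).
    """
    max_val = max(-x, 0.0)
    return (1 - t) * x + log_weight * (
        math.log(math.exp(-max_val) + math.exp(-x - max_val)) + max_val
    )
```

With `log_weight = 1` this matches the textbook sigmoid cross-entropy for moderate logits, and it stays finite even for extreme values like `x = -1000`, where the naive `log(1 + exp(-x))` would overflow.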

maopal commented 2 years ago

Hi all, in this loss function, do you know what the `pad_mask` variable is? I thought the target would have one channel per class, but this function suggests there is a `pad_mask` channel as well.

Thank you!