marcellacornia / mlnet

A Deep Multi-Level Network for Saliency Prediction. ICPR 2016
MIT License
94 stars, 37 forks

A little confused about the loss function #8

Closed Time1ess closed 7 years ago

Time1ess commented 7 years ago

Hi, I have some questions about the way you implemented the loss function, based on your paper. According to the paper, the deviation between predicted and ground-truth values is weighted by a linear function alpha - y_i and then squared to denote the error. But in your code, `K.mean(K.square((y_pred / max_y) - y_true) / (1 - y_true + 0.1))`, only the numerator is squared, not the whole fraction. Also, the L2 regularization term mentioned in the paper does not appear in the loss function implementation. Have I misunderstood something?

marcellacornia commented 7 years ago

Hi, thanks for your interest in our work. The correct implementation of our loss function is the one reported in our source code; squaring the whole fraction in the paper was a small mistake. The L2 regularization term, instead, is added as a weight regularizer on the EltWise Product layer, as you can see in the model.py file.
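
For reference, the difference between the two variants can be sketched in plain NumPy (a stand-in for the Keras backend ops; the function names and sample values below are illustrative, not from the repo):

```python
import numpy as np

def loss_as_in_code(y_pred, y_true, max_y):
    # Version used in the source code: only the numerator is squared.
    return np.mean(np.square(y_pred / max_y - y_true) / (1 - y_true + 0.1))

def loss_as_in_paper(y_pred, y_true, max_y):
    # Version written in the paper (by mistake): the whole fraction is squared.
    return np.mean(np.square((y_pred / max_y - y_true) / (1 - y_true + 0.1)))

# Toy saliency values in [0, 1]; max_y normalizes the prediction.
y_true = np.array([0.0, 0.5, 1.0])
y_pred = np.array([0.2, 0.4, 0.9])
max_y = y_pred.max()
```

The denominator `1 - y_true + 0.1` down-weights errors at low-saliency pixels either way, but the two variants give numerically different losses (and different gradients), which is why the distinction the question raises matters.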

Time1ess commented 7 years ago

Thanks, it makes sense to me now.