WeitaoVan / L-GM-loss

Implementation of our accepted CVPR 2018 paper "Rethinking Feature Distribution for Loss Functions in Image Classification"
MIT License

Question about m_add_ #10

Closed yuyijie1995 closed 5 years ago

yuyijie1995 commented 5 years ago

I am reading your C++ code and have some questions about the parameter `m_add_`:

```cpp
template <typename Dtype>
static __global__ void margin_top(const int M_, const int N_, Dtype* top_data,
    const Dtype* label, const Dtype margin_mul, const Dtype margin_add) {
  CUDA_KERNEL_LOOP(i, M_) {
    const int y = (int)label[i];
    top_data[i*N_ + y] += top_data[i*N_ + y] * margin_mul - margin_add;
  }
}
```

However, I cannot find an initialization value for `margin_add` in the CIFAR-100 example's trainval.prototxt. Can you tell me something about this parameter? I also noticed that you do not use `margin_add` in your TensorFlow version. Does this parameter not matter? Looking forward to your reply.

WeitaoVan commented 5 years ago

The parameter `margin_add` (or `m_add_`) is something we tried during research. We found that it does not bring benefits, so we do not use it anymore. It is still in the code; we should have cleaned it up.
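
For readers with the same question: when `margin_add` is set to 0, the kernel above reduces to the purely multiplicative margin, which matches the behavior kept in the TensorFlow version. The following is a minimal CPU sketch of that update, not the repository's code; the function name `apply_margin` and the example values are illustrative only.

```cpp
// CPU sketch of the margin update applied by the CUDA kernel above.
// With margin_add = 0, only the multiplicative margin (margin_mul) remains.
#include <cstdio>
#include <vector>

// M: batch size, N: number of classes; top is the M x N score matrix.
// For each sample i, the entry of its ground-truth class y gets enlarged
// by a factor of (1 + margin_mul) and shifted down by margin_add.
void apply_margin(int M, int N, std::vector<float>& top,
                  const std::vector<int>& label,
                  float margin_mul, float margin_add) {
  for (int i = 0; i < M; ++i) {
    const int y = label[i];
    top[i * N + y] += top[i * N + y] * margin_mul - margin_add;
  }
}

int main() {
  // One sample, three classes, ground-truth class 1, margin_mul = 0.3.
  std::vector<float> top = {1.0f, 2.0f, 3.0f};
  std::vector<int> label = {1};
  apply_margin(1, 3, top, label, 0.3f, 0.0f);  // margin_add = 0, as in the TF version
  std::printf("%.2f %.2f %.2f\n", top[0], top[1], top[2]);  // prints: 1.00 2.60 3.00
  return 0;
}
```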