zimenglan-sysu-512 / Focal-Loss

implementation of the focal loss layer

caffe_gpu_scal(count, alpha_, power_prob_data); #5

Open xuanyuyt opened 7 years ago

xuanyuyt commented 7 years ago

Why is the alpha fixed? In the paper, if the sample is a positive sample, multiply by alpha_; otherwise multiply by (1 - alpha_).
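
For reference, this is the alpha-balanced binary focal loss from the paper (Lin et al., 2017), where the weight is chosen per sample. A minimal sketch; the function and variable names are illustrative, not from this repo:

```cpp
#include <cmath>

// Paper's binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t),
// where alpha_t = alpha for a positive sample and (1 - alpha) otherwise.
// Illustrative helper, not code from this repo.
float binary_focal_loss(float p, int label, float alpha, float gamma) {
  float p_t     = (label == 1) ? p : 1.0f - p;          // prob of the true class
  float alpha_t = (label == 1) ? alpha : 1.0f - alpha;  // per-class weight
  return -alpha_t * std::pow(1.0f - p_t, gamma) * std::log(p_t);
}
```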

zimenglan-sysu-512 commented 7 years ago

hi @xuanyuyt, in the paper it's binary. here i just use softmax, and all classes share the same alpha.
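
In this softmax variant every class shares one alpha, which is why the code can scale the whole probability buffer at once with `caffe_gpu_scal(count, alpha_, power_prob_data)`. A minimal CPU sketch of that idea, assuming `prob` holds the softmax output (names are illustrative, not from this repo):

```cpp
#include <cmath>

// Multi-class focal loss with a single shared alpha, as in this repo's
// softmax formulation: FL = -alpha * (1 - p_k)^gamma * log(p_k),
// where p_k is the softmax probability of the ground-truth class k.
// Illustrative helper, not code from this repo.
float softmax_focal_loss(const float* prob, int label,
                         float alpha, float gamma) {
  float p_k = prob[label];  // softmax probability of the true class
  // alpha is the same constant for every class here, so it only
  // rescales the loss rather than reweighting positives vs. negatives.
  return -alpha * std::pow(1.0f - p_k, gamma) * std::log(p_k);
}
```

Because alpha is a single constant in this formulation, it acts as a uniform scale on the loss, which is what the next comment asks about.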

chinabing commented 6 years ago

@zimenglan-sysu-512 if you use the same alpha for all classes, it could be deleted. is there any difference?