WeitaoVan / L-GM-loss

Implementation of our accepted CVPR 2018 paper "Rethinking Feature Distribution for Loss Functions in Image Classification"
MIT License

How was the neg_sqr_dist formulated? Could you point out the equation in the paper? #18

Open kalyanainala opened 4 years ago

kalyanainala commented 4 years ago

In the tensorflow folder, the python code resnet_model.py has the following lines:

```python
XY = tf.matmul(feat, means, transpose_b=True)
XX = tf.reduce_sum(tf.square(feat), axis=1, keep_dims=True)
YY = tf.reduce_sum(tf.square(tf.transpose(means)), axis=0, keep_dims=True)
neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)
```

I have read the paper but couldn't understand how the `neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)` equation was formed. Could you point out the equation in the paper that gives `-0.5 * (XX - 2.0 * XY + YY)`?

kalyanainala commented 4 years ago

I believe it is equation (18) that gives the formula for neg_sqr_dist, via the expansion (a − b)² = a² − 2ab + b², where a and b are x_i and the mean respectively.

[image: equation_18]

If I am right, why was the covariance matrix not considered in the equation?
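The expansion above can be checked numerically. Below is a minimal NumPy sketch (not the repo's code; the shapes `feat` of size N×D and `means` of size K×D are assumptions matching the TensorFlow snippet) showing that `-0.5 * (XX - 2*XY + YY)` equals the directly computed pairwise value −½‖x − μ‖²:

```python
import numpy as np

# Hypothetical example shapes: feat is (N, D), means is (K, D)
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 3))
means = rng.normal(size=(5, 3))

# Expansion: ||x - mu||^2 = ||x||^2 - 2 x.mu + ||mu||^2
XY = feat @ means.T                          # (N, K) dot products
XX = np.sum(feat**2, axis=1, keepdims=True)  # (N, 1) squared norms of x
YY = np.sum(means**2, axis=1)[None, :]       # (1, K) squared norms of mu
neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)   # (N, K) via broadcasting

# Direct pairwise computation for comparison
direct = -0.5 * np.sum((feat[:, None, :] - means[None, :, :]) ** 2, axis=2)

assert np.allclose(neg_sqr_dist, direct)
```

Note this identity only covers the isotropic (identity-covariance) case; a full Mahalanobis distance (x − μ)ᵀΣ⁻¹(x − μ) would not factor into these three broadcastable terms.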