yuyijie1995 opened 5 years ago
This is my code:

```python
reshape_var = F.reshape(var, (1, -1, self._feature_dim))
reshape_mean = F.reshape(mean, (1, -1, self._feature_dim))
expand_data = F.expand_dims(x, 1)
t = expand_data - reshape_mean  # (8, 10, 2)
m_distance = F.batch_dot(t / (reshape_var + 1e-8), t, transpose_b=True)
index = F.array([i for i in range(self._num_class)])
distance = m_distance[:, index, index]  # diagonal: per-class distance, (N, num_class)
ALPHA = F.one_hot(y, self._num_class, on_value=self._alpha, dtype='float32')
K = ALPHA + F.ones((N, self._num_class), dtype='float32')
logits_with_margin = distance * K
batch_mean = F.take(mean, y)
likelihood_reg_loss = self._lamda * (F.sum(F.square(x - batch_mean)) / 2.0) * (1. / N)
```
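For reference, the margin construction at the end of the snippet (note that `logits_with_margin` is an element-wise product, `distance * K`) can be reproduced in plain NumPy; the sizes and values below are hypothetical, just to show the shapes:

```python
import numpy as np

# Hypothetical sizes: batch N=4, num_class C=3, margin alpha=0.1
N, C = 4, 3
alpha = 0.1
y = np.array([0, 2, 1, 2])
distance = np.array([[1., 2., 3.],
                     [4., 5., 6.],
                     [7., 8., 9.],
                     [1., 1., 1.]], dtype=np.float32)

one_hot = np.eye(C, dtype=np.float32)[y]       # (N, C), 1 at the true class
ALPHA = alpha * one_hot                        # margin applied only to the true class
K = ALPHA + np.ones((N, C), dtype=np.float32)  # 1 everywhere, 1 + alpha on the true class
logits_with_margin = distance * K              # element-wise scaling of the distance

print(K[0])  # first row: margin on class 0
```

So each sample's distance to its ground-truth class is enlarged by a factor of `1 + alpha`, which is what forces the margin.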
I am trying to add learnable feature variances to the Python code, but I found that the accuracy drops a lot. The strategy is as follows:
1. I reshape var and mean to (1, num_classes, feature_dim), and reshape the input feature to (batch_size, 1, feature_dim).
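To sanity-check the reshape/broadcast step, here is a minimal NumPy sketch (the sizes N=4, C=3, D=2 are hypothetical) that computes the same per-class squared distance with a diagonal variance, avoiding the `batch_dot` + diagonal extraction:

```python
import numpy as np

# Hypothetical sizes: batch N, num_class C, feature_dim D
N, C, D = 4, 3, 2
rng = np.random.default_rng(0)
x = rng.standard_normal((N, D)).astype(np.float32)      # input features
mean = rng.standard_normal((C, D)).astype(np.float32)   # per-class means
var = (np.abs(rng.standard_normal((C, D))) + 0.5).astype(np.float32)  # positive per-class variances

# Broadcast (N, 1, D) against (1, C, D) -> residual t of shape (N, C, D)
t = x[:, None, :] - mean[None, :, :]

# Per-class squared Mahalanobis-style distance with diagonal covariance, (N, C)
distance = np.sum(t * t / (var[None, :, :] + 1e-8), axis=2)

print(distance.shape)  # (N, C)
```

Summing `t * t / var` over the feature axis gives the diagonal of the `batch_dot` result directly, which is cheaper than forming the full (N, C, C) matrix and indexing its diagonal.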