If you use einsum, you don't need to flatten/reshape the multidimensional array to carry out the matrix multiplication.
https://stackoverflow.com/questions/26089893/understanding-numpys-einsum
https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/
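For illustration, here is a minimal NumPy sketch (not code from this repo; the [2, 28, 28, 512] feature shape is just an assumption) showing that einsum gives the same bilinear matrix as the flatten-and-matmul route:

```python
import numpy as np

# Toy conv feature map: [batch, H, W, C] (shape assumed for illustration).
x = np.random.rand(2, 28, 28, 512)

# einsum sums over the spatial axes (j, k) directly on the 4-D array,
# so no transpose/reshape is needed.
phi_einsum = np.einsum('ijkm,ijkn->imn', x, x)             # [batch, C, C]

# Same result via explicit flattening of the spatial dimensions.
x_flat = x.reshape(2, 28 * 28, 512)                        # [batch, H*W, C]
phi_matmul = np.matmul(x_flat.transpose(0, 2, 1), x_flat)  # [batch, C, C]

print(np.allclose(phi_einsum, phi_matmul))                 # True
```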
@abhaydoke09 Thank you very much for your answer! einsum is really useful!
Hello, I still have a problem. After running the second part of the whole model, training finishes, but it seems the final model is never saved anywhere in the code. Why is training done without saving the resulting model? Can you give me some details?
Hi, I'm confused about the difference between
self.phi_I = tf.einsum('ijkm,ijkn->imn', self.conv5_3, self.conv5_3)
in bcnn_DD_woft.py and
self.conv5_3 = tf.transpose(self.conv5_3, perm=[0,3,1,2])
self.conv5_3 = tf.reshape(self.conv5_3, [-1,512,784])
conv5_3_T = tf.transpose(self.conv5_3, perm=[0,2,1])
self.phi_I = tf.matmul(self.conv5_3, conv5_3_T)
in bcnn_finetuning.py. They look like the same operation, and I think they should be.
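For reference, a quick sanity check (my own sketch, not from the repo, run in TensorFlow 2 eager mode and assuming conv5_3 has shape [batch, 28, 28, 512] so that H*W = 784) suggests the two routes produce the same [batch, 512, 512] matrix; the einsum version simply avoids the explicit transpose/reshape:

```python
import tensorflow as tf

# Dummy conv5_3 activation: [batch, H, W, C] (shape assumed for this check).
conv5_3 = tf.random.uniform([2, 28, 28, 512])

# Route 1 (bcnn_DD_woft.py): einsum sums over the spatial axes j, k.
phi_einsum = tf.einsum('ijkm,ijkn->imn', conv5_3, conv5_3)  # [batch, 512, 512]

# Route 2 (bcnn_finetuning.py): channels first, flatten H*W, batch matmul.
x = tf.transpose(conv5_3, perm=[0, 3, 1, 2])                # [batch, 512, 28, 28]
x = tf.reshape(x, [-1, 512, 784])                           # [batch, 512, 784]
x_T = tf.transpose(x, perm=[0, 2, 1])                       # [batch, 784, 512]
phi_matmul = tf.matmul(x, x_T)                              # [batch, 512, 512]

# The maximum difference is only float32 rounding noise.
print(tf.reduce_max(tf.abs(phi_einsum - phi_matmul)).numpy())
```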