Closed: December-boy closed this issue 5 years ago
I also encountered this problem. Did you solve it? Thanks a lot.
Dear friend: Thank you for your work. I tried to call this function to reproduce the paper, but the loss (cost function) stays very large during training and shows no tendency to decrease; it may be diverging. Can you help me see what is wrong?

self.cbp = compact_bilinear_pooling_layer(self.conv5_3, self.conv5_2, 16000, sum_pool=True)

In my implementation I use VGG-16 conv5_2 and conv5_3 as the inputs bottom1 and bottom2, then pass the resulting self.cbp directly to a fully connected layer with a softmax classifier. But the loss on both the training set and the validation set stays very large and does not converge. Can you tell me whether some steps are missing from this pipeline? I use stochastic gradient descent to optimize the cross entropy between the final prediction and the label, with a batch size of 32.
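One step worth checking: the compact bilinear pooling paper applies signed square-root followed by L2 normalization to the pooled feature before it reaches the classifier, and skipping that normalization is a common cause of a loss that never comes down. A minimal NumPy sketch of that post-processing (the function name normalize_cbp and the sample values are illustrative, not from this repo):

```python
import numpy as np

def normalize_cbp(x, eps=1e-12):
    """Post-process a pooled feature as in the CBP paper:
    signed square root, then per-sample L2 normalization."""
    # Signed square root: sign(x) * sqrt(|x|) tempers large activations
    x = np.sign(x) * np.sqrt(np.abs(x))
    # L2-normalize each sample (last axis is the feature dimension)
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return x / (norm + eps)

# Illustrative pooled output for one sample
cbp = np.array([[4.0, -9.0, 0.0, 16.0]])
out = normalize_cbp(cbp)  # each row now has unit L2 norm
```

In a TensorFlow 1.x graph the same two operations would sit between self.cbp and the fully connected layer.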
Can you help me?
Can you solve this problem?
I tried using the same environment (CUDA 8.0, TensorFlow 1.12.0, g++ 5.4.0) and reran ./compile.sh; after that there were no more problems.
I also encountered this problem. Did you solve it?