Open tigereatsheep opened 6 years ago
I ran into the same situation.
You can explicitly set the (static) output shape using `Tensor.set_shape`.
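For example, a minimal sketch, assuming `cbp` is the pooled output and that `output_height`, `output_width`, and `output_dim` are known when the graph is built:

```python
# Sketch only: pin the static shape that TensorFlow could not infer on its own.
# `output_height`, `output_width`, and `output_dim` are assumed to be known here.
cbp.set_shape([None, output_height, output_width, output_dim])
```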
Here is my solution.
At line 147 of `compact_bilinear_pooling.py`:

```python
# [0, 0, 0, output_dim])
# cbp = tf.reshape(cbp_flat, output_shape)
# Read the static spatial dimensions from the input tensor and reshape with
# them, so the output shape is fully defined at graph-construction time.
output_height, output_width = bottom1.shape.as_list()[1:3]
cbp = tf.reshape(cbp_flat, [-1, output_height, output_width, output_dim])
```
I don't know if this is correct.
Dear friend: Thank you for your work. I tried to call this function to reproduce the paper, but the loss (cost function) stays very large during training, with no tendency to decrease; it may be diverging. Can you help me see what is wrong?

`self.cbp = compact_bilinear_pooling_layer(self.conv5_3, self.conv5_2, 16000, sum_pool=True)`

In my implementation I use VGG-16's `conv5_2` and `conv5_3` as the `bottom1` and `bottom2` inputs, then pass the resulting `self.cbp` directly to a fully connected layer with a softmax classifier. But the loss on both the training set and the validation set stays very large and does not converge. Can you tell me whether some steps are missing from this process? I use stochastic gradient descent to optimize the cross entropy between the final prediction and the label, with a batch size of 32.
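For reference, a minimal sketch of the setup described above. The placeholder tensors are hypothetical stand-ins for the VGG-16 activations, and since the question asks about missing steps, it also includes the signed-square-root and L2 normalization that the paper applies before the classifier:

```python
import tensorflow as tf
from compact_bilinear_pooling import compact_bilinear_pooling_layer

# Hypothetical placeholders standing in for the VGG-16 conv5_2/conv5_3
# activations (shape [batch, 14, 14, 512] for 224x224 inputs).
conv5_2 = tf.placeholder(tf.float32, [None, 14, 14, 512])
conv5_3 = tf.placeholder(tf.float32, [None, 14, 14, 512])

# Pool the two feature maps into a 16000-dim descriptor, as described above.
cbp = compact_bilinear_pooling_layer(conv5_3, conv5_2, 16000, sum_pool=True)

# The paper applies a signed square root and L2 normalization to the pooled
# feature before the classifier; if these are missing from your pipeline,
# adding them is worth trying.
cbp = tf.sign(cbp) * tf.sqrt(tf.abs(cbp) + 1e-10)
cbp = tf.nn.l2_normalize(cbp, axis=-1)
```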
Can you help me?
Why?