ronghanghu / tensorflow_compact_bilinear_pooling

Compact Bilinear Pooling in TensorFlow

When I put the same tensor into this function twice, the output_dim of the result is "?" #3

Open tigereatsheep opened 6 years ago

tigereatsheep commented 6 years ago

Why?

GukehAn commented 6 years ago

I ran into the same situation.

ronghanghu commented 6 years ago

You can explicitly set the (static) output shapes using `Tensor.set_shape`.
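
For example, a minimal sketch (TF 1.x; the placeholder shapes and the `output_dim` value are illustrative assumptions, and the layer signature is taken from this repo's `compact_bilinear_pooling.py`):

```python
import tensorflow as tf
from compact_bilinear_pooling import compact_bilinear_pooling_layer

# Hypothetical inputs: two conv feature maps of shape [batch, 14, 14, 512].
output_dim = 8000
bottom1 = tf.placeholder(tf.float32, [None, 14, 14, 512])
bottom2 = tf.placeholder(tf.float32, [None, 14, 14, 512])

cbp = compact_bilinear_pooling_layer(bottom1, bottom2, output_dim,
                                     sum_pool=True)
# The static shape comes out as "?" because the layer builds its output
# shape dynamically via tf.shape; pin it explicitly:
cbp.set_shape([None, output_dim])
print(cbp.get_shape())  # (?, 8000) instead of an unknown dimension
```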

GukehAn commented 6 years ago

Here is my solution.

Line 147 of `compact_bilinear_pooling.py`:

```python
# Original lines, commented out:
# output_shape = tf.add(tf.multiply(tf.shape(bottom1), [1, 1, 1, 0]),
#                       [0, 0, 0, output_dim])
# cbp = tf.reshape(cbp_flat, output_shape)

# Replacement: reshape with the statically known height and width, so the
# result has a fully defined static shape.
outputHeight, outputWidth = bottom1.shape.as_list()[1:3]
cbp = tf.reshape(cbp_flat, [-1, outputHeight, outputWidth, output_dim])
```

I don't know if this is correct.
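
Another option, following the `set_shape` suggestion above, is to keep the original dynamic reshape (which also handles spatial sizes that are unknown at graph-construction time) and only pin the static shape. A sketch, not tested against the repo:

```python
# Keep the dynamic reshape from the original line 147...
output_shape = tf.add(tf.multiply(tf.shape(bottom1), [1, 1, 1, 0]),
                      [0, 0, 0, output_dim])
cbp = tf.reshape(cbp_flat, output_shape)
# ...then pin only what is statically known, so downstream layers see
# output_dim instead of "?".
cbp.set_shape([None, None, None, output_dim])
```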

JUSTDODoDo commented 5 years ago

Dear friend: thank you for your work. I tried to call this function to reproduce the paper, but the loss (cost function) stays very large during training and shows no tendency to decrease; it may be diverging. Can you help me see what is wrong?

`self.cbp = compact_bilinear_pooling_layer(self.conv5_3, self.conv5_2, 16000, sum_pool=True)`

I use VGG-16's conv5_2 and conv5_3 as the inputs bottom1 and bottom2, and then pass the resulting self.cbp directly to a fully connected layer with a softmax classifier. But the loss on both the training set and the validation set stays very large and does not converge. Can you tell me whether some steps are missing from my pipeline? I optimize the cross entropy between the final prediction and the labels with stochastic gradient descent, with a batch size of 32.
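
For reference, a minimal sketch of the pipeline described above (TF 1.x; `conv5_2`, `conv5_3`, and `num_classes` mirror the comment and are otherwise hypothetical, and the signed square root plus L2 normalization are an assumption taken from the original compact bilinear pooling paper, which applies them after pooling):

```python
import tensorflow as tf
from compact_bilinear_pooling import compact_bilinear_pooling_layer

def build_classifier(conv5_2, conv5_3, num_classes):
    # Compact bilinear pooling over the two VGG-16 feature maps,
    # as described in the comment above.
    cbp = compact_bilinear_pooling_layer(conv5_3, conv5_2, 16000,
                                         sum_pool=True)
    cbp.set_shape([None, 16000])

    # Assumption: the CBP paper normalizes the pooled vector with a
    # signed square root followed by L2 normalization before the
    # classifier; omitting this is a plausible cause of a large,
    # non-decreasing loss at output_dim=16000.
    cbp = tf.sign(cbp) * tf.sqrt(tf.abs(cbp) + 1e-12)
    cbp = tf.nn.l2_normalize(cbp, axis=-1)

    # Fully connected layer producing logits for the softmax classifier.
    logits = tf.layers.dense(cbp, num_classes)
    return logits
```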

JUSTDODoDo commented 5 years ago

Can you help me?