ronghanghu / tensorflow_compact_bilinear_pooling

Compact Bilinear Pooling in TensorFlow

Compilation and Usage Example? #1

Open vrama91 opened 7 years ago

vrama91 commented 7 years ago

Hi!

I am trying to use this code with TF-0.11 and have successfully managed to run `sh compile.sh`, but when I run the test (`sequential_batch_fft_test.py`), I get the following error. Thanks a ton in advance for any help with this!

Caused by op u'gradients_7/SequentialBatchIFFT_2_grad/SequentialBatchFFT', defined at:
  File "sequential_batch_fft_test.py", line 130, in <module>
    test_shape()
  File "sequential_batch_fft_test.py", line 38, in test_shape
    g_ifft = tf.gradients(output_ifft, input_pl)[0]
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py", line 469, in gradients
    in_grads = _AsList(grad_fn(op, *out_grads))
  File "/home/vrama91/projects/cub_captioning_classification/attention_im2txt/ops/tensorflow_compact_bilinear_pooling/sequential_fft/sequential_batch_fft_ops.py", line 38, in _SequentialBatchIFFTGrad
    return (sequential_batch_fft(grad, op.get_attr("compute_size"))
  File "<string>", line 31, in sequential_batch_fft
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
    op_def=op_def)
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
    self._traceback = _extract_stack()

...which was originally created as op u'SequentialBatchIFFT_2', defined at:
  File "sequential_batch_fft_test.py", line 130, in <module>
    test_shape()
  File "sequential_batch_fft_test.py", line 36, in test_shape
    output_ifft = sequential_batch_ifft(input_pl)
  File "<string>", line 51, in sequential_batch_ifft
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
    op_def=op_def)
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/vrama91/tf_gpu/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'SequentialBatchFFT' with these attrs.  Registered kernels:
 device='GPU'; T in [DT_COMPLEX128]
 device='GPU'; T in [DT_COMPLEX64]
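
For anyone hitting this: the error means TensorFlow tried to place `SequentialBatchFFT` on a device for which no kernel is registered. Per the message, the op only has GPU kernels (for complex64/complex128), so the test fails on a CPU-only build or when no GPU is visible to TensorFlow. The transform itself can still be sanity-checked on CPU with plain NumPy; this is a hedged sketch independent of the custom op, with `np.fft.fft` standing in for the batched FFT the op performs:

```python
import numpy as np

# A small batch of complex inputs, like what the op consumes
# (it is registered only for DT_COMPLEX64 / DT_COMPLEX128).
rng = np.random.RandomState(0)
batch = (rng.randn(4, 8) + 1j * rng.randn(4, 8)).astype(np.complex64)

# Batched FFT: transform each example along the last axis independently.
fft_out = np.fft.fft(batch, axis=-1)

# The inverse FFT recovers the input up to float32 rounding,
# mirroring the SequentialBatchFFT / SequentialBatchIFFT pair.
recovered = np.fft.ifft(fft_out, axis=-1)
assert np.allclose(recovered, batch, atol=1e-5)
```

This only checks the math; to run the custom op itself you still need a CUDA-enabled TensorFlow build with a visible GPU.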
JUSTDODoDo commented 5 years ago

Dear friend,

Thank you for your work. I tried to call this function to reproduce the paper, but the loss (cost function) stays very large during training and shows no tendency to decrease; it may be diverging. Can you help me see what is wrong?

`self.cbp = compact_bilinear_pooling_layer(self.conv5_3, self.conv5_2, 16000, sum_pool=True)`

I use VGG-16 conv5_2 and conv5_3 as the inputs bottom1 and bottom2, then pass the resulting `self.cbp` directly to a fully connected layer with a softmax classifier. But the loss on both the training and validation sets stays very large and does not converge. Can you tell me if there are some missing steps in the pipeline? I use stochastic gradient descent to optimize the cross entropy between the final prediction and the label, with batch size 32.
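
One step that is easy to miss: in the compact bilinear pooling paper the pooled feature is passed through an element-wise signed square root and L2 normalization before the classifier, and skipping these often leaves the loss large and unstable. A minimal NumPy sketch of that post-processing (an assumption about your pipeline, not code from this repo; `cbp` stands for the pooled 16000-d output):

```python
import numpy as np

def normalize_cbp(cbp, eps=1e-10):
    """Signed sqrt + per-example L2 normalization of pooled features.

    cbp: array of shape (batch, d), the compact-bilinear-pooled output.
    """
    # Element-wise signed square root dampens the large dynamic range
    # that bilinear features typically have.
    y = np.sign(cbp) * np.sqrt(np.abs(cbp))
    # L2-normalize each example so every row has unit norm.
    norms = np.linalg.norm(y, axis=1, keepdims=True)
    return y / (norms + eps)

# Example: a fake pooled batch with a large dynamic range.
cbp = np.array([[4.0, -9.0, 0.0],
                [1.0,  1.0, 1.0]])
out = normalize_cbp(cbp)
```

The TensorFlow equivalent would use `tf.sign`, `tf.sqrt`, `tf.abs`, and `tf.nn.l2_normalize` on `self.cbp` before the fully connected layer.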

JUSTDODoDo commented 5 years ago

Can you help me?

zqy1106 commented 3 years ago


I met the same problem. Have you solved it?