CVLAB-Unibo / Unsupervised-Adaptation-for-Deep-Stereo

Code for "Unsupervised Adaptation for Deep Stereo" - ICCV17
Apache License 2.0

input and filter must have the same depth: 3 vs 1 #5

Closed TrackingBird closed 6 years ago

TrackingBird commented 6 years ago

Dear authors, thanks for the great work. I'm wondering whether you could provide the AD-Census, SGM and CCNN code, if possible. To get the confidence map of the disparity, I use the code from the author's homepage https://github.com/fabiotosi92/CCNN-Tensorflow; it gives perfect results when tested on its own data. However, I get an error when I use my own disparity maps computed with SGM and AD-Census. The error is as follows:

tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 3 vs 1

[*] Testing....
 [*] Load model: SUCCESS
uint8uint8uint8uint8uint8hahahahahaha
 [*] Start Testing...
num_samples: [ 4]
 [*] Test image:./images/disparity/ad-census/81.png
hahahahahhahhaconfidenceconfidence
2018-04-24 09:11:43.337912: W tensorflow/core/kernels/queue_base.cc:294] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "./model/main.py", line 47, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "./model/main.py", line 43, in main
    model.test(args)
  File "/media/jennifer/Papers/3_Codes_now/CCNN-Tensorflow/model/model.py", line 174, in test
    confidence = self.sess.run(png,feed_dict={self.disp: batch})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 3 vs 1
     [[Node: CCNN/conv1/conv/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_div_0_0/_71, CCNN/conv1/weights/read)]]

Caused by op u'CCNN/conv1/conv/Conv2D', defined at:
  File "./model/main.py", line 47, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "./model/main.py", line 37, in main
    model_name=args.model_name
  File "/media/jennifer/Papers/3_Codes_now/CCNN-Tensorflow/model/model.py", line 22, in __init__
    self.build_CCNN()
  File "/media/jennifer/Papers/3_Codes_now/CCNN-Tensorflow/model/model.py", line 45, in build_CCNN
    self.conv1 = ops.conv2d(self.disp, [kernel_size, kernel_size, 1, filters], 1, True, padding='VALID')
  File "/media/jennifer/Papers/3_Codes_now/CCNN-Tensorflow/model/ops.py", line 10, in conv2d
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding=padding)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 399, in conv2d
    data_format=data_format, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): input and filter must have the same depth: 3 vs 1
     [[Node: CCNN/conv1/conv/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_div_0_0/_71, CCNN/conv1/weights/read)]]

Have you ever met the same error? I hope you can help. Thanks.
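
A quick way to check how many channels the saved disparity PNG actually has (a hypothetical snippet, not part of either repository):

```python
from PIL import Image

# Hypothetical check: inspect the mode of the saved disparity map.
img = Image.open('./images/disparity/ad-census/81.png')  # path taken from the log above
print(img.mode)  # 'L' / 'I' -> single channel; 'RGB' / 'RGBA' -> 3 / 4 channels
```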

mattpoggi commented 6 years ago

It seems you are providing 3-channel disparity maps to the network, while it works on single-channel inputs. You can either convert them to grayscale offline or add tf.image.rgb_to_grayscale before the first convolutional layer in the CCNN source code.
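
A minimal sketch of the second option, assuming a TF 1.x graph and a [N, H, W, 3] input as implied by the error; the actual layer construction in model.py may differ:

```python
import tensorflow as tf

# Sketch only: collapse a 3-channel disparity map to 1 channel before conv1,
# so the input depth matches the single-channel filter CCNN was built with.
disp_rgb = tf.placeholder(tf.float32, [None, None, None, 3], name='disp')
disp = tf.image.rgb_to_grayscale(disp_rgb)  # -> [N, H, W, 1]
```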

TrackingBird commented 6 years ago

@mattpoggi Thanks, you are right. I have solved this issue by changing dataloader.py to decode the PNG with a single channel: image = tf.image.decode_png(image_raw, channels=1, dtype=tf.uint8)
~_~
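
For reference, a minimal TF 1.x sketch of that dataloader change; the real dataloader.py reads filenames from an input queue, so the surrounding code will differ:

```python
import tensorflow as tf

# Sketch under the above assumptions: force single-channel PNG decoding so the
# tensor fed to CCNN has depth 1 instead of 3.
filename = './images/disparity/ad-census/81.png'  # example path from the log
image_raw = tf.read_file(filename)
image = tf.image.decode_png(image_raw, channels=1, dtype=tf.uint8)  # [H, W, 1]
```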