mrharicot / monodepth

Unsupervised single image depth prediction with CNNs

How to simply test the model_kitti_stereo on left image and right image? #214

Open WenguoLi opened 5 years ago

WenguoLi commented 5 years ago

Hi, thanks for your work! I downloaded model_kitti_stereo from your page and modified the script monodepth_simple.py in the hope of running a simple stereo test. Here is my diff:

```
32c32,33
< parser.add_argument('--image_path',       type=str,   help='path to the image', required=True)
---
> parser.add_argument('--left_image_path',  type=str,   help='path to the image', required=True)
> parser.add_argument('--right_image_path', type=str,   help='path to the image', required=True)
52c53,54
<     left  = tf.placeholder(tf.float32, [2, args.input_height, args.input_width, 3])
---
>     left  = tf.placeholder(tf.float32, [1, args.input_height, args.input_width, 3])
>     right = tf.placeholder(tf.float32, [1, args.input_height, args.input_width, 3])
55,59c57,67
<     input_image = scipy.misc.imread(args.image_path, mode="RGB")
<     original_height, original_width, num_channels = input_image.shape
<     input_image = scipy.misc.imresize(input_image, [args.input_height, args.input_width], interp='lanczos')
<     input_image = input_image.astype(np.float32) / 255
<     input_images = np.stack((input_image, np.fliplr(input_image)), 0)
---
>     left_image = scipy.misc.imread(args.left_image_path, mode="RGB")
>     original_height, original_width, num_channels = left_image.shape
>     left_image = scipy.misc.imresize(left_image, [args.input_height, args.input_width], interp='lanczos')
>     left_image = left_image.astype(np.float32) / 255
>     left_images = np.stack((left_image, np.fliplr(left_image)), 0)
> 
>     right_image = scipy.misc.imread(args.right_image_path, mode="RGB")
>     original_height, original_width, num_channels = right_image.shape
>     right_image = scipy.misc.imresize(right_image, [args.input_height, args.input_width], interp='lanczos')
>     right_image = right_image.astype(np.float32) / 255
>     right_images = np.stack((right_image, np.fliplr(right_image)), 0)
78,79c86,87
<     disp = sess.run(model.disp_left_est[0], feed_dict={left: input_images})
<     disp_pp = post_process_disparity(disp.squeeze()).astype(np.float32)
---
>     disp = sess.run(model.disp_left_est[0], feed_dict={left: left_images, right: right_images})
>     disp_pp = post_process_disparity(disp[0].squeeze()).astype(np.float32)
96,97c104,105
<         batch_size=2,
<         num_threads=1,
---
>         batch_size=1,
>         num_threads=2,
99c107
<         do_stereo=False,
---
>         do_stereo=True,
```

Then I ran it with the following command:

```
python monodepth_simple_stereo.py --left_image_path images/org_left.jpg --right_image_path images/org_right.jpg --checkpoint_path models/model_kitti_stereo/model_kitti_stereo
```

but the following errors occurred:

```
Traceback (most recent call last):
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 455, in _apply_op_helper
    as_ref=input_arg.is_ref)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1211, in internal_convert_n_to_tensor
    ctx=ctx))
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 430, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "monodepth_simple_stereo.py", line 118, in <module>
    tf.app.run()
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "monodepth_simple_stereo.py", line 115, in main
    test_simple(params)
  File "monodepth_simple_stereo.py", line 55, in test_simple
    model = MonodepthModel(params, "test", left, None)
  File "/home/apuser/deeplearning/tensorflow/examples/DepthEstimation/monodepth/monodepth_model.py", line 50, in __init__
    self.build_model()
  File "/home/apuser/deeplearning/tensorflow/examples/DepthEstimation/monodepth/monodepth_model.py", line 297, in build_model
    self.model_input = tf.concat([self.left, self.right], 3)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1124, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1033, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "/home/apuser/tf_venv/tf_py3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 483, in _apply_op_helper
    raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [float32, <NOT CONVERTIBLE TO TENSOR>] that don't all match.
```
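
Looking at the traceback, the failure comes from build_model: with do_stereo=True it calls tf.concat([self.left, self.right], 3), and my script still constructs the model as MonodepthModel(params, "test", left, None), so the None right input is what cannot be converted to a tensor. Below is a rough, untested sketch of what I was trying to end up with (it assumes the MonodepthModel(params, mode, left, right) constructor from monodepth_model.py and feeds a single stereo pair with batch size 1, so it skips the flip-based post_process_disparity step from monodepth_simple.py):

```python
# Rough, untested sketch of the stereo test I was aiming for.
# Assumes the MonodepthModel(params, mode, left, right) signature from monodepth_model.py
# and a single stereo pair (batch size 1), so the flip-based
# post_process_disparity trick from monodepth_simple.py is skipped.
import numpy as np
import scipy.misc
import tensorflow as tf

from monodepth_model import MonodepthModel


def load_image(path, height, width):
    """Read an RGB image, resize to the network input size, scale to [0, 1], add a batch dim."""
    img = scipy.misc.imread(path, mode="RGB")
    img = scipy.misc.imresize(img, [height, width], interp='lanczos')
    return np.expand_dims(img.astype(np.float32) / 255, 0)   # shape [1, H, W, 3]


def test_simple_stereo(params, args):
    left  = tf.placeholder(tf.float32, [1, args.input_height, args.input_width, 3])
    right = tf.placeholder(tf.float32, [1, args.input_height, args.input_width, 3])

    # Pass BOTH placeholders: with do_stereo=True, build_model concatenates
    # self.left and self.right, which is exactly where the None blew up above.
    model = MonodepthModel(params, "test", left, right)

    left_image  = load_image(args.left_image_path,  args.input_height, args.input_width)
    right_image = load_image(args.right_image_path, args.input_height, args.input_width)

    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
    train_saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())
    train_saver.restore(sess, args.checkpoint_path.split(".")[0])  # same restore-path handling as monodepth_simple.py

    disp = sess.run(model.disp_left_est[0],
                    feed_dict={left: left_image, right: right_image})
    return disp[0].squeeze().astype(np.float32)
```

Is that the intended way to test model_kitti_stereo on one left/right pair, or am I still missing something?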
petitchamp commented 5 years ago

I am having the same issue. Could you help?