dBeker / Faster-RCNN-TensorFlow-Python3

Tensorflow Faster R-CNN for Windows/Linux and Python 3 (3.5/3.6/3.7)
MIT License
609 stars 329 forks

After training on my own dataset successfully, running demo.py fails with the following errors #71

Closed: Jai-wei closed this issue 5 years ago

Jai-wei commented 5 years ago

    (py35GPU) F:\desktop\shujudata\GPUFaster-RCNN-TensorFlow-Python3.5-master>python35 demo.py
    2019-05-20 17:01:29.627866: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
    2019-05-20 17:01:30.098738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
    name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
    pciBusID: 0000:01:00.0
    totalMemory: 3.00GiB freeMemory: 2.42GiB
    2019-05-20 17:01:30.186337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
    2019-05-20 17:01:39.729633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-05-20 17:01:39.738410: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
    2019-05-20 17:01:39.742103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
    2019-05-20 17:01:39.767422: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2117 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
    WARNING:tensorflow:From F:\desktop\shujudata\GPUFaster-RCNN-TensorFlow-Python3.5-master\lib\nets\network.py:57: calling expand_dims (from tensorflow.python.ops.array_ops) with dim is deprecated and will be removed in a future version.
    Instructions for updating:
    Use the axis argument instead
    Traceback (most recent call last):
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call
        return fn(*args)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]
      [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@vgg_16/bbox_pred/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](vgg_16/bbox_pred/weights, save/RestoreV2/_3)]]
      [[Node: save/RestoreV2/_60 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_66_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1725, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 877, in run
        run_metadata_ptr)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run
        feed_dict_tensor, options, run_metadata)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run
        run_metadata)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]
      [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@vgg_16/bbox_pred/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](vgg_16/bbox_pred/weights, save/RestoreV2/_3)]]
      [[Node: save/RestoreV2/_60 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_66_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

    Caused by op 'save/Assign_1', defined at:
      File "demo.py", line 143, in <module>
        saver = tf.train.Saver()
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1281, in __init__
        self.build()
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1293, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1330, in _build
        build_save=build_save, build_restore=build_restore)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 778, in _build_internal
        restore_sequentially, reshape)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 419, in _AddRestoreOps
        assign_ops.append(saveable.restore(saveable_tensors, shapes))
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 112, in restore
        self.op.get_shape().is_fully_defined())
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\ops\state_ops.py", line 216, in assign
        validate_shape=validate_shape)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 60, in assign
        use_locking=use_locking, name=name)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func
        return func(*args, **kwargs)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\ops.py", line 3155, in create_op
        op_def=op_def)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\ops.py", line 1717, in __init__
        self._traceback = tf_stack.extract_stack()

    InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]
      [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@vgg_16/bbox_pred/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](vgg_16/bbox_pred/weights, save/RestoreV2/_3)]]
      [[Node: save/RestoreV2/_60 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_66_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "demo.py", line 144, in <module>
        saver.restore(sess, tfmodel)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1759, in restore
        err, "a mismatch between the current graph and the graph")
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

    Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]
      [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@vgg_16/bbox_pred/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](vgg_16/bbox_pred/weights, save/RestoreV2/_3)]]
      [[Node: save/RestoreV2/_60 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_66_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

    Caused by op 'save/Assign_1', defined at:
      File "demo.py", line 143, in <module>
        saver = tf.train.Saver()
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1281, in __init__
        self.build()
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1293, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 1330, in _build
        build_save=build_save, build_restore=build_restore)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 778, in _build_internal
        restore_sequentially, reshape)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 419, in _AddRestoreOps
        assign_ops.append(saveable.restore(saveable_tensors, shapes))
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\training\saver.py", line 112, in restore
        self.op.get_shape().is_fully_defined())
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\ops\state_ops.py", line 216, in assign
        validate_shape=validate_shape)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 60, in assign
        use_locking=use_locking, name=name)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func
        return func(*args, **kwargs)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\ops.py", line 3155, in create_op
        op_def=op_def)
      File "E:\anaconda\envs\py35GPU\lib\site-packages\tensorflow\python\framework\ops.py", line 1717, in __init__
        self._traceback = tf_stack.extract_stack()

    InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

    Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]
      [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@vgg_16/bbox_pred/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](vgg_16/bbox_pred/weights, save/RestoreV2/_3)]]
      [[Node: save/RestoreV2/_60 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_66_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
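
The checkpoint side of the mismatch can be verified directly: the TF 1.x checkpoint reader lists every saved variable with its shape without building a graph. A minimal diagnostic sketch, assuming a hypothetical checkpoint path (adjust it to wherever training wrote your .ckpt files):

    import tensorflow as tf

    # Hypothetical path to the trained checkpoint prefix; adjust to your output dir.
    ckpt_path = './default/voc_2007_trainval/default/vgg16_faster_rcnn_iter_40000.ckpt'

    # List every variable stored in the checkpoint together with its shape;
    # vgg_16/bbox_pred/weights shows the shape the restore op will receive.
    reader = tf.train.NewCheckpointReader(ckpt_path)
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(name, shape)  # e.g. vgg_16/bbox_pred/weights [4096, 20]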

morpheusthewhite commented 5 years ago

    This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

    Assign requires shapes of both tensors to match. lhs shape= [4096,84] rhs shape= [4096,20]

As stated in the output, there is a mismatch between the model you are trying to restore and the one you saved. You should probably change the 21 in the following line (the same line that causes the error):

    # 21 = 20 Pascal VOC classes + 1 background; use your own class count here
    net.create_architecture(sess, "TEST", 21,
                            tag='default', anchor_scales=[8, 16, 32])
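
The shape mismatch encodes the class count directly: the bbox_pred weights have shape [4096, num_classes * 4], so the demo graph built with 21 classes expects [4096, 84], while the checkpoint's [4096, 20] was trained with 20 / 4 = 5 classes (four of your own plus background). A minimal sketch of the fix inside demo.py, assuming a CLASSES tuple as in this repo's demo script (the label names are placeholders for whatever you trained on):

    # Classes from training: '__background__' plus your own labels
    # (placeholder names; substitute the labels you actually trained with).
    CLASSES = ('__background__', 'label1', 'label2', 'label3', 'label4')

    # bbox_pred weights are [4096, num_classes * 4]; a checkpoint with
    # rhs shape [4096, 20] therefore implies 20 / 4 = 5 classes.
    num_classes = len(CLASSES)  # 5 here, replacing the hard-coded 21

    net.create_architecture(sess, "TEST", num_classes,
                            tag='default', anchor_scales=[8, 16, 32])

Deriving the count from len(CLASSES) keeps the demo graph and the checkpoint in sync whenever the label list changes.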
morpheusthewhite commented 5 years ago

I opened a pull request that clarifies and improves the code.

dBeker commented 5 years ago

Resolved in the relevant PR.