charlesq34 / frustum-pointnets

Frustum PointNets for 3D Object Detection from RGB-D Data
Apache License 2.0

Test problem on Windows #135

Open YitingJunhao opened 1 year ago

YitingJunhao commented 1 year ago

2023-03-08 14:52:55.933056: W tensorflow/core/common_runtime/colocation_graph.cc:1139] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Const: CPU
VarHandleOp: CPU
AssignVariableOp: CPU
VarIsInitializedOp: CPU
ReadVariableOp: CPU
Mul: CPU
Switch: CPU
Sub: CPU
AssignSubVariableOp: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  fc2/BatchNorm/moving_mean/Initializer/zeros (Const)
  fc2/BatchNorm/moving_mean (VarHandleOp) /device:GPU:0
  fc2/BatchNorm/moving_mean/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
  fc2/BatchNorm/moving_mean/Assign (AssignVariableOp) /device:GPU:0
  fc2/BatchNorm/moving_mean/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond/ReadVariableOp/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond/ReadVariableOp (ReadVariableOp)
  fc2/BatchNorm/cond_2/AssignMovingAvg/decay (Const) /device:GPU:0
  fc2/BatchNorm/cond_2/AssignMovingAvg/ReadVariableOp/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond_2/AssignMovingAvg/ReadVariableOp (ReadVariableOp)
  fc2/BatchNorm/cond_2/AssignMovingAvg/sub (Sub) /device:GPU:0
  fc2/BatchNorm/cond_2/AssignMovingAvg/mul (Mul) /device:GPU:0
  fc2/BatchNorm/cond_2/AssignMovingAvg/AssignSubVariableOp (AssignSubVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_2/AssignMovingAvg/ReadVariableOp_1 (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_2/ReadVariableOp (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_2/ReadVariableOp_1/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond_2/ReadVariableOp_1 (ReadVariableOp)
  save/AssignVariableOp_78 (AssignVariableOp) /device:GPU:0

2023-03-08 14:52:55.933257: W tensorflow/core/common_runtime/colocation_graph.cc:1139] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Const: CPU
VarHandleOp: CPU
AssignVariableOp: CPU
VarIsInitializedOp: CPU
ReadVariableOp: CPU
Mul: CPU
Switch: CPU
Sub: CPU
AssignSubVariableOp: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  fc2/BatchNorm/moving_variance/Initializer/ones (Const)
  fc2/BatchNorm/moving_variance (VarHandleOp) /device:GPU:0
  fc2/BatchNorm/moving_variance/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
  fc2/BatchNorm/moving_variance/Assign (AssignVariableOp) /device:GPU:0
  fc2/BatchNorm/moving_variance/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_1/ReadVariableOp/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond_1/ReadVariableOp (ReadVariableOp)
  fc2/BatchNorm/cond_3/AssignMovingAvg/decay (Const) /device:GPU:0
  fc2/BatchNorm/cond_3/AssignMovingAvg/ReadVariableOp/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond_3/AssignMovingAvg/ReadVariableOp (ReadVariableOp)
  fc2/BatchNorm/cond_3/AssignMovingAvg/sub (Sub) /device:GPU:0
  fc2/BatchNorm/cond_3/AssignMovingAvg/mul (Mul) /device:GPU:0
  fc2/BatchNorm/cond_3/AssignMovingAvg/AssignSubVariableOp (AssignSubVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_3/AssignMovingAvg/ReadVariableOp_1 (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_3/ReadVariableOp (ReadVariableOp) /device:GPU:0
  fc2/BatchNorm/cond_3/ReadVariableOp_1/Switch (Switch) /device:GPU:0
  fc2/BatchNorm/cond_3/ReadVariableOp_1 (ReadVariableOp)
  save/AssignVariableOp_79 (AssignVariableOp) /device:GPU:0

2023-03-08 14:52:55.974152: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key conv-reg1-stage1/BatchNorm/beta not found in checkpoint
Traceback (most recent call last):
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call
    return fn(*args)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key conv-reg1-stage1/BatchNorm/beta not found in checkpoint
	 [[{{node save/RestoreV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 1299, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 958, in run
    run_metadata_ptr)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1181, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1359, in _do_run
    run_metadata)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\client\session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key conv-reg1-stage1/BatchNorm/beta not found in checkpoint
	 [[node save/RestoreV2 (defined at train/test.py:67) ]]

Original stack trace for 'save/RestoreV2':
  File "train/test.py", line 354, in <module>
    test_from_rgb_detection(FLAGS.output+'.pickle', FLAGS.output)
  File "train/test.py", line 216, in test_from_rgb_detection
    sess, ops = get_session_and_ops(batch_size=batch_size, num_point=NUM_POINT)
  File "train/test.py", line 67, in get_session_and_ops
    saver = tf.compat.v1.train.Saver()
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 836, in __init__
    self.build()
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 848, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 886, in _build
    build_restore=build_restore)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 516, in _build_internal
    restore_sequentially, reshape)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 336, in _AddRestoreOps
    restore_sequentially)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 583, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1523, in restore_v2
    name=name)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 744, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 3485, in _create_op_internal
    op_def=op_def)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 1949, in __init__
    self._traceback = tf_stack.extract_stack()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\py_checkpoint_reader.py", line 70, in get_tensor
    self, compat.as_bytes(tensor_str))
RuntimeError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 1309, in restore
    names_to_keys = object_graph_key_mapping(save_path)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 1627, in object_graph_key_mapping
    object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\py_checkpoint_reader.py", line 74, in get_tensor
    error_translator(e)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\py_checkpoint_reader.py", line 35, in error_translator
    raise errors_impl.NotFoundError(None, None, error_message)
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train/test.py", line 354, in <module>
    test_from_rgb_detection(FLAGS.output+'.pickle', FLAGS.output)
  File "train/test.py", line 216, in test_from_rgb_detection
    sess, ops = get_session_and_ops(batch_size=batch_size, num_point=NUM_POINT)
  File "train/test.py", line 76, in get_session_and_ops
    saver.restore(sess, MODEL_PATH)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 1315, in restore
    err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key conv-reg1-stage1/BatchNorm/beta not found in checkpoint
	 [[node save/RestoreV2 (defined at train/test.py:67) ]]

Original stack trace for 'save/RestoreV2':
  File "train/test.py", line 354, in <module>
    test_from_rgb_detection(FLAGS.output+'.pickle', FLAGS.output)
  File "train/test.py", line 216, in test_from_rgb_detection
    sess, ops = get_session_and_ops(batch_size=batch_size, num_point=NUM_POINT)
  File "train/test.py", line 67, in get_session_and_ops
    saver = tf.compat.v1.train.Saver()
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 836, in __init__
    self.build()
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 848, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 886, in _build
    build_restore=build_restore)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 516, in _build_internal
    restore_sequentially, reshape)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 336, in _AddRestoreOps
    restore_sequentially)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 583, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1523, in restore_v2
    name=name)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 744, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 3485, in _create_op_internal
    op_def=op_def)
  File "D:\Anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 1949, in __init__
    self._traceback = tf_stack.extract_stack()
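The traceback above says the graph asks the checkpoint for a variable named `conv-reg1-stage1/BatchNorm/beta` that the checkpoint does not contain, i.e. the variable names built by `train/test.py` and the names saved in the checkpoint have diverged. A minimal sketch of how one might diagnose this, assuming the TF1-style name-based Saver: the `diff_variable_names` helper below is hypothetical (not part of the repo), and the two example name lists are illustrative, mimicking the mismatch in this log. In a real session the lists would come from `[v.op.name for v in tf.compat.v1.global_variables()]` and `[name for name, _ in tf.train.list_variables(MODEL_PATH)]`.

```python
def diff_variable_names(graph_names, ckpt_names):
    """Compare the variable names the graph expects against the keys
    actually stored in the checkpoint. Returns (missing, unused):
    names the graph wants but the checkpoint lacks, and names the
    checkpoint holds but the graph never requests."""
    graph_set, ckpt_set = set(graph_names), set(ckpt_names)
    missing = sorted(graph_set - ckpt_set)
    unused = sorted(ckpt_set - graph_set)
    return missing, unused


# Illustrative names only (hypothetical scopes, not read from a real
# checkpoint): the graph uses a 'conv-reg1-stage1' scope while the
# checkpoint was saved under a different scope name.
graph_names = ["conv-reg1-stage1/BatchNorm/beta", "fc2/BatchNorm/moving_mean"]
ckpt_names = ["conv-reg1/BatchNorm/beta", "fc2/BatchNorm/moving_mean"]

missing, unused = diff_variable_names(graph_names, ckpt_names)
print("missing from checkpoint:", missing)
print("unused checkpoint keys:", unused)
```

If the two lists differ only by a scope prefix or renamed layer, the usual fix is to restore with `tf.compat.v1.train.Saver(var_list=...)` mapping checkpoint names to graph variables, or to rebuild the graph with the scope names the checkpoint was trained under. The separate `Key _CHECKPOINTABLE_OBJECT_GRAPH not found` message is expected for name-based checkpoints and is not itself the failure.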

Thank you for participating in our evaluation!
The syntax of the command is incorrect. [translated from Chinese; Windows cmd error]
Loading detections...
number of files for evaluation: 0
done.
Finished 2D bounding box eval.
Finished Birdeye eval.
Finished 3D bounding box eval.
Your evaluation results are available at: train/detection_results_v1