2018-11-09 20:50:02.033559: W tensorflow/core/kernels/queue_base.cc:277] _8_input_producer_2: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033665: W tensorflow/core/kernels/queue_base.cc:277] _3_input_producer_1: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033703: W tensorflow/core/kernels/queue_base.cc:277] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033741: W tensorflow/core/kernels/queue_base.cc:277] _5_shuffle_batch_1/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033824: W tensorflow/core/kernels/queue_base.cc:277] _6_shuffle_batch_2/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033841: W tensorflow/core/kernels/queue_base.cc:277] _6_shuffle_batch_2/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033853: W tensorflow/core/kernels/queue_base.cc:277] _6_shuffle_batch_2/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033878: W tensorflow/core/kernels/queue_base.cc:277] _6_shuffle_batch_2/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033908: W tensorflow/core/kernels/queue_base.cc:277] _5_shuffle_batch_1/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033936: W tensorflow/core/kernels/queue_base.cc:277] _5_shuffle_batch_1/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033948: W tensorflow/core/kernels/queue_base.cc:277] _5_shuffle_batch_1/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.033980: W tensorflow/core/kernels/queue_base.cc:277] _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.034024: W tensorflow/core/kernels/queue_base.cc:277] _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.034039: W tensorflow/core/kernels/queue_base.cc:277] _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-09 20:50:02.034066: W tensorflow/core/kernels/queue_base.cc:277] _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled
[[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_producer, input_producer/RandomShuffle)]]
return fn(*args)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [2,1024] vs. [0,1024]
[[Node: GPU_1/generator_1/fully_connected/batchnorm/mul_1 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:1"](GPU_1/generator_1/fully_connected/BiasAdd, GPU_1/generator_1/fully_connected/batchnorm/mul)]]
[[Node: global_norm_151/L2Loss_9/_1098 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:1", send_device_incarnation=1, tensor_name="edge_37370_global_norm_151/L2Loss_9", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/main_single.py", line 186, in <module>
status, appendix = launch_training(d_params)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/main_single.py", line 100, in launch_training
status = train_module.train(kwargs)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/train_single.py", line 219, in train
run_metadata=run_metadata)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [2,1024] vs. [0,1024]
[[Node: GPU_1/generator_1/fully_connected/batchnorm/mul_1 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:1"](GPU_1/generator_1/fully_connected/BiasAdd, GPU_1/generator_1/fully_connected/batchnorm/mul)]]
[[Node: global_norm_151/L2Loss_9/_1098 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:1", send_device_incarnation=1, tensor_name="edge_37370_global_norm_151/L2Loss_9", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
Caused by op 'GPU_1/generator_1/fully_connected/batchnorm/mul_1', defined at:
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/main_single.py", line 186, in <module>
status, appendix = launch_training(d_params)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/main_single.py", line 100, in launch_training
status = train_module.train(kwargs)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/train_single.py", line 146, in train
optimizer=optimizer)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/graph_single.py", line 153, in build_multi_tower_graph
optim_d=optim_d)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/graph_single.py", line 245, in build_single_graph
output_channel=3)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/graph_single.py", line 226, in transfer
scope_name=generator_scope)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/models_mru.py", line 207, in generator_skip
normalizer_params=normalizer_params_g)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/mru.py", line 87, in fully_connected
linear_out = normalizer_fn(linear_out, activation_fn=None, **normalizer_params)
File "/home/xuqi/PycharmProjects/sketchyGAN/SketchyGAN/src_single/models_mru.py", line 36, in batchnorm
1e-5)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/ops/nn_impl.py", line 835, in batch_normalization
return x * math_ops.cast(inv, x.dtype) + math_ops.cast(
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/ops/math_ops.py", line 847, in binary_op_wrapper
return func(x, y, name=name)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/ops/math_ops.py", line 1091, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4759, in mul
"Mul", x=x, y=y, name=name)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "/home/xuqi/anaconda2/envs/py34/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Incompatible shapes: [2,1024] vs. [0,1024]
[[Node: GPU_1/generator_1/fully_connected/batchnorm/mul_1 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:1"](GPU_1/generator_1/fully_connected/BiasAdd, GPU_1/generator_1/fully_connected/batchnorm/mul)]]
[[Node: global_norm_151/L2Loss_9/_1098 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:1", send_device_incarnation=1, tensor_name="edge_37370_global_norm_151/L2Loss_9", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
I really don't know what's wrong here, and I would appreciate it if anyone could help me.
After trying many possible fixes, I'm stuck. By the way, my TensorFlow version is 1.9.
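One detail that may help: the failing Mul gets operands of shape [2,1024] and [0,1024] on GPU_1, which suggests the second GPU tower received an empty batch shard. A minimal NumPy sketch (hypothetical, not taken from the SketchyGAN code) of how splitting a short final batch across two towers can produce exactly an empty [0, 1024] slice:

```python
import numpy as np

# Splitting a full batch of 2 examples across 2 towers works as expected.
batch = np.zeros((2, 1024), dtype=np.float32)
towers = np.array_split(batch, 2, axis=0)
print([t.shape for t in towers])  # [(1, 1024), (1, 1024)]

# But if the input pipeline delivers a final partial batch (e.g. the dataset
# size is not a multiple of the global batch size), one tower gets 0 rows.
short_batch = np.zeros((1, 1024), dtype=np.float32)
towers = np.array_split(short_batch, 2, axis=0)
print([t.shape for t in towers])  # [(1, 1024), (0, 1024)] -> empty tower
```

If that is the cause, it would point at the shuffle-batch input pipeline producing a smaller-than-expected batch rather than at the batchnorm code itself.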