I have downloaded the ADE20K dataset and converted each PNG to a width of 512px.
The pandas DataFrame was generated as below (`l` is a list of the resized PNG paths):
```python
import pandas as pd

# l is a list of paths to the resized PNGs
d = pd.DataFrame(l, columns=['path'])
d.to_hdf("noise.h5", key='df')
```
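As a sanity check, the table can be read back with pandas' matching `read_hdf` (a quick sketch, just to confirm the paths round-trip):

```python
import pandas as pd

# Read the path table back to confirm it round-trips.
d = pd.read_hdf("noise.h5", key='df')
print(d.head())        # first few resized PNG paths
print(len(d), "rows")
```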
`directories.train` has been set to the path of noise.h5. During training, the exception below was thrown:
```
Training on dataset cityscapes
Building computational graph ...
Training on cityscapes
Training on cityscapes
<------------ Building global image generator architecture ------------>
Sampling noise...
Real image shape: [None, None, None, 3]
Reconstruction shape: [None, 512, 1024, 3]
<------------ Building multiscale discriminator architecture ------------>
Building discriminator D(x)
Shape of x: [None, None, None, 3]
Shape of x downsampled by factor 2: [None, None, None, 3]
Shape of x downsampled by factor 4: [None, None, None, 3]
<------------ Building multiscale discriminator architecture ------------>
Building discriminator D(G(z))
Shape of x: [None, 512, 1024, 3]
Shape of x downsampled by factor 2: [None, 256, 512, 3]
Shape of x downsampled by factor 4: [None, 128, 256, 3]
2018-11-13 16:59:16.311330: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-11-13 16:59:27.145299: W tensorflow/core/kernels/data/cache_dataset_ops.cc:770] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,24,32,8] vs. shape[1] = [1,32,64,8]
[[{{node generator/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](generator/quantizer_image/Round, generator/noise_generator/conv_out/conv2d/BiasAdd, generator/quantizer_image/ArgMin/dimension)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 119, in
main()
File "train.py", line 116, in main
train(config_train, args)
File "train.py", line 70, in train
start_time, epoch, args.name, G_loss_best, D_loss_best)
File "/home/xiakai/software/generative-compression-master/utils.py", line 78, in run_diagnostics
G_loss, D_loss, summary = sess.run([model.G_loss, model.D_loss, model.merge_op], feed_dict=feed_dict_test)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,24,32,8] vs. shape[1] = [1,32,64,8]
[[node generator/concat (defined at /home/xiakai/software/generative-compression-master/model.py:77) = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](generator/quantizer_image/Round, generator/noise_generator/conv_out/conv2d/BiasAdd, generator/quantizer_image/ArgMin/dimension)]]
Caused by op 'generator/concat', defined at:
File "train.py", line 119, in
main()
File "train.py", line 116, in main
train(config_train, args)
File "train.py", line 34, in train
gan = Model(config, paths, name=args.name, dataset=args.dataset)
File "/home/xiakai/software/generative-compression-master/model.py", line 77, in init
self.z = tf.concat([self.w_hat, Gv], axis=-1)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 1124, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1033, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,24,32,8] vs. shape[1] = [1,32,64,8]
[[node generator/concat (defined at /home/xiakai/software/generative-compression-master/model.py:77) = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](generator/quantizer_image/Round, generator/noise_generator/conv_out/conv2d/BiasAdd, generator/quantizer_image/ArgMin/dimension)]]
```
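From the log, the cityscapes config seems to expect 512x1024 input images: the reconstruction shape is [None, 512, 1024, 3], and the two concat inputs [1,24,32,8] vs [1,32,64,8] differ by exactly the model's downsampling factor of 16 (my 384x512 images give 24x32 feature maps, while the noise branch produces 32x64 maps for 512x1024). So I suspect resizing the ADE20K images to the full 512x1024 would make the shapes match. A minimal sketch of what I mean, assuming Pillow is installed and `l` still holds the PNG paths:

```python
from PIL import Image

# Resize every PNG to 512x1024 (height x width), the resolution the
# cityscapes config appears to expect given the reconstruction shape.
for p in l:
    img = Image.open(p).convert('RGB')
    img = img.resize((1024, 512), Image.LANCZOS)  # PIL expects (width, height)
    img.save(p)
```

Or is there a config option that lets the model accept 512px-wide images directly?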
How can I fix this? Thanks.