marcbelmont / cnn-watermark-removal

Fully convolutional deep neural network to remove transparent overlays from images
1.26k stars · 226 forks

TypeError: Input 'filenames' of 'TFRecordDataset' Op has type float32 that does not match expected type of string #27

Closed: kmrabhay closed this issue 5 years ago

kmrabhay commented 5 years ago

I added my 25 custom watermarked images to data/VOCdevkit/VOC2012/JPEGImages as described and trained using the given command.

The last line, return (next_element, [iterator.make_initializer(x) for x in [train, val]]), is the part of the code that raises the above error.

def dataset_split(dataset_fn, split):
    import pdb; pdb.set_trace()
    records = get_records()
    split = int(len(records) * split)
    train, val = dataset_fn(records[:split]), dataset_fn(records[split:])
    iterator = tf.contrib.data.Iterator.from_structure(
        train.output_types, train.output_shapes)
    next_element = iterator.get_next()
    return (next_element,
            [iterator.make_initializer(x) for x in [train, val]])

What could be the issue? Thanks.

marcbelmont commented 5 years ago

Can you print here the variable records ?

kmrabhay commented 5 years ago

It prints ['data/voc-0.tfrecords']
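
[Editorial note] This single-entry list is the root of the TypeError. A minimal plain-Python sketch of the split arithmetic in dataset_split (assuming nothing beyond the code posted above): with one record file, the 80/20 split leaves the training list empty, and TensorFlow infers the dtype of an empty filenames list as float32 rather than string.

```python
# Sketch of dataset_split's arithmetic when get_records() returns
# a single shard, as printed above.
records = ['data/voc-0.tfrecords']
split = int(len(records) * .8)   # int(1 * 0.8) == 0
train_records = records[:split]  # [] -- empty training list
val_records = records[split:]    # the whole list goes to validation
print(train_records, val_records)  # [] ['data/voc-0.tfrecords']
```

Passing that empty list to TFRecordDataset produces the "float32 does not match expected type string" message, since TensorFlow cannot infer a string dtype from an empty list.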

marcbelmont commented 5 years ago

Those 25 images, are they for training or inference? If they're for inference, this is not the correct way to proceed; see the last paragraph of Usage.

If they're for training, you may be passing an empty list here: https://github.com/marcbelmont/cnn-watermark-removal/blob/master/dataset.py#L110, which is not OK. We assume there are records for both training and testing.

kmrabhay commented 5 years ago

Right, thanks. I changed if i % 200 == 0: to if i % 2 == 0: to create more TFRecord files (initially there was only one TFRecord file, since I have only 25 images in total in data/VOCdevkit/VOC2012/JPEGImages for both training and testing).
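
[Editorial note] Assuming the loop starts a new TFRecord file whenever i % N == 0 (as the condition changed above suggests), the shard counts work out like this:

```python
def shard_count(n_images, step):
    # A new TFRecord file is started whenever i % step == 0,
    # so count how many indices trigger it.
    return sum(1 for i in range(n_images) if i % step == 0)

# Original condition: one shard for 25 images -> the train split is empty.
shards = shard_count(25, 200)
print(shards, int(shards * .8))  # 1 0

# Changed condition: 13 shards -> 10 for training, 3 for validation.
shards = shard_count(25, 2)
print(shards, int(shards * .8))  # 13 10
```

With the changed condition both halves of the split are non-empty, which resolves the original TypeError.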

kmrabhay commented 5 years ago

Now I am getting this error.

Caused by op 'IteratorGetNext', defined at:
  File "watermarks.py", line 297, in <module>
    tf.app.run()
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "watermarks.py", line 292, in main
    train(sess, globals()[FLAGS.dataset])
  File "watermarks.py", line 185, in train
    next_image, iterator_inits = dataset_split(dataset, .8)
  File "/home/abhay/mml/datascience-practice/cnn-watermark-removal/dataset.py", line 77, in dataset_split
    next_element = iterator.get_next()
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/dataset_ops.py", line 304, in get_next
    name=name))
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 379, in iterator_get_next
    output_shapes=output_shapes, name=name)
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/abhay/mml/venv-nerapi/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): Attempted to repeat an empty dataset infinitely.
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,120,120,3]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](Iterator)]]

I got the above error on this line:

next_element = iterator.get_next()

Is something I changed wrong? Or what else can I do to train the model successfully?
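
[Editorial note] The message "Attempted to repeat an empty dataset infinitely" means one of the two split datasets ended up with no elements at all. A plain-Python analogue (hypothetical; TF's Dataset.repeat() is the real mechanism):

```python
from itertools import cycle

# Analogue of calling repeat() on an empty dataset: cycling an
# empty sequence can never yield an element.
empty = cycle([])
try:
    next(empty)
except StopIteration:
    print("an empty dataset cannot be repeated")
```

So even with the shard count fixed, one of the splits still contained no usable images, which the comment below resolves by enlarging the dataset.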

kmrabhay commented 5 years ago

Fixed the above error by increasing the dataset from 25 to 60 images.

marcbelmont commented 5 years ago

Well done :)