NanoNets / number-plate-detection

Automatic License Plate Reader using tensorflow attention OCR

ValueError: Negative dimension size caused by subtracting 3 from 2 #10

Open akmalkadi opened 3 years ago

akmalkadi commented 3 years ago

Greetings,

I have an issue with training, but first I would like to ask for clarification about the following: "Having stored our cropped images of equal sizes in a different directory.."

By "equal sizes", do you mean that all images should have the same width and height (squares), or is it okay for them to be rectangles as long as they all share the same dimensions?

What confuses me is that all the images you use in the example are squares (200x200). Also, after building my dataset with equally sized rectangles (1900x17), I got the following error during the training step:

Traceback (most recent call last):
  File "train.py", line 209, in <module>
    app.run()
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "train.py", line 197, in main
    endpoints = model.create_base(data.images, data.labels_one_hot)
  File "/home/user/Projects/dlOCR/python/model.py", line 363, in create_base
    for i, v in enumerate(views)
  File "/home/user/Projects/dlOCR/python/model.py", line 208, in conv_tower_fn
    images, final_endpoint=mparams.final_endpoint)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/contrib/slim/python/slim/nets/inception_v3.py", line 143, in inception_v3_base
    net = layers.conv2d(net, depth(192), [3, 3], scope=end_point)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/contrib/layers/python/layers/layers.py", line 1159, in convolution2d
    conv_dims=2)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/contrib/layers/python/layers/layers.py", line 1057, in convolution
    outputs = layer.apply(inputs)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1700, in apply
    return self.__call__(inputs, *args, **kwargs)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/layers/base.py", line 548, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 854, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
    relative to /opt/anaconda3/envs/tf2/lib/python2.7/site-packages/tensorflow_core/python:

    keras/layers/convolutional.py:197 call
        outputs = self._convolution_op(inputs, self.kernel)
    ops/nn_ops.py:1134 __call__
        return self.conv_op(inp, filter)
    ops/nn_ops.py:639 __call__
        return self.call(inp, filter)
    ops/nn_ops.py:238 __call__
        name=self.name)
    ops/nn_ops.py:2010 conv2d
        name=name)
    ops/gen_nn_ops.py:1071 conv2d
        data_format=data_format, dilations=dilations, name=name)
    framework/op_def_library.py:794 _apply_op_helper
        op_def=op_def)
    util/deprecation.py:507 new_func
        return func(*args, **kwargs)
    framework/ops.py:3357 create_op
        attrs, op_def, compute_device)
    framework/ops.py:3426 _create_op_internal
        op_def=op_def)
    framework/ops.py:1770 __init__
        control_input_ops)
    framework/ops.py:1610 _create_c_op
        raise ValueError(str(e))

    ValueError: Negative dimension size caused by subtracting 3 from 2 for 'AttentionOcr_v1/conv_tower_fn/INCE/InceptionV3/Conv2d_4a_3x3/Conv2D' (op: 'Conv2D') with input shapes: [32,473,2,80], [3,3,80,192].

Please help me to solve this issue. Thank you

Update1:

The error is coming from this line in model.py: net, _ = inception.inception_v3_base(images, final_endpoint=mparams.final_endpoint)

I tried to debug the images object:

          print images
          print type(images)

Output:

Tensor("AttentionOcr_v1/split:0", shape=(32, 1900, 17, 3), dtype=float32)
<class 'tensorflow.python.framework.ops.Tensor'>

Update2: I just realized that my issue is because the height set in number_plates.py is less than 26. When I try 26 or more I don't get the same error. In my case the height is 17 and will usually be less than 26. Any idea how to change this limit?

akmalkadi commented 3 years ago

I added two updates to the question. I hope someone here is reading what I am writing.

akmalkadi commented 3 years ago

Update3: I scaled the images from 1900x17 to 1900x27 and training now starts without error messages. Is there a reason why a height below 27 doesn't work? Is there anything I can do other than scaling to solve this issue? And if scaling does work for me, how can I justify this step?

Thanks.
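A possible explanation for the 27-pixel floor (my own sketch, not code from this repo, assuming the model runs the stock tf.contrib.slim inception_v3_base stem up to the Mixed_5d endpoint): each VALID-padded 3x3 layer needs at least 3 pixels in every spatial dimension, and the stem halves and trims the height several times before the layer named in the traceback. Tracing the height through the stem (the helper names `conv_out` and `trace` are mine, for illustration):

```python
# Sketch: trace one spatial dimension of an input crop through the
# InceptionV3 stem (tf-slim inception_v3_base, layers before Mixed_5b).

def conv_out(size, kernel, stride=1, padding='VALID'):
    """Output size of a conv/pool layer, per TF's padding formulas."""
    if padding == 'SAME':
        return -(-size // stride)          # ceil(size / stride)
    return (size - kernel) // stride + 1   # floor((size - kernel) / stride) + 1

# (name, kernel, stride, padding) of the stem layers, in order
STEM = [
    ('Conv2d_1a_3x3',  3, 2, 'VALID'),
    ('Conv2d_2a_3x3',  3, 1, 'VALID'),
    ('Conv2d_2b_3x3',  3, 1, 'SAME'),
    ('MaxPool_3a_3x3', 3, 2, 'VALID'),
    ('Conv2d_3b_1x1',  1, 1, 'VALID'),
    ('Conv2d_4a_3x3',  3, 1, 'VALID'),
    ('MaxPool_5a_3x3', 3, 2, 'VALID'),
]

def trace(size):
    """Return (failing_layer, size_at_failure) or (None, final_size)."""
    for name, kernel, stride, padding in STEM:
        if padding == 'VALID' and size < kernel:
            return name, size  # this layer would raise the ValueError
        size = conv_out(size, kernel, stride, padding)
    return None, size

print(trace(17))  # ('Conv2d_4a_3x3', 2)  -> dies where the traceback does
print(trace(26))  # ('MaxPool_5a_3x3', 2) -> still too small, different layer
print(trace(27))  # (None, 1)             -> smallest height that survives
```

With a height of 17 the trace dies at Conv2d_4a_3x3 with an input of 2, matching the [32,473,2,80] shape in the traceback, and 27 is the smallest height that survives the whole stem, which would explain why rescaling to 1900x27 made training start. If that assumption holds, the alternative to rescaling would be stopping the conv tower at an earlier final_endpoint, at the cost of a shallower feature extractor.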