AntonMu / TrainYourOwnYOLO

Train a state-of-the-art YOLOv3 object detector from scratch!

Getting ValueError when running Train_YOLO.py in Colab #164

Closed bushra-hafeez closed 4 years ago

bushra-hafeez commented 4 years ago

Hello, I am training the model on my own dataset on Google Colab and have followed all the instructions in the Google Colab Tutorial word for word. I specified the path to the training images as a command-line argument to the Train_YOLO.py script, but when I run the training script, the following error occurs:

ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: <tf.Tensor 'training/Adam/Adam/conv2d_59/kernel/m/Initializer/zeros:0' shape=(1, 1, 1024, 30) dtype=float32>

Thanks

johnjhr commented 4 years ago

Same problem here.


Using TensorFlow backend.
Create YOLOv3 model with 9 anchors and 1 classes.
Load weights /content/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo.h5.
Freeze the first 249 layers of total 252 layers.
8888888888888888888*********************************98888888888888888888888888888888888
['/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m9_Color.png 108,289,397,445,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m8_Color.png 84,248,514,441,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/savedtest.jpg 22,44,629,446,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m5_Color.png 103,155,516,400,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m7_Color.png 132,225,442,436,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m3_Color.png 31,80,593,469,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m6_Color.png 184,162,473,467,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m4_Color.png 43,93,640,459,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m12_Color.png 118,245,444,445,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m1_Color.png 71,35,534,445,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m10_Color.png 100,117,526,425,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m2_Color_Color.png 22,141,615,460,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m11_Color.png 110,99,536,364,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v8_Color.png 153,172,593,416,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v7_Color.png 184,225,533,441,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v3_Color.png 166,213,553,449,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v2_Color.png 237,180,540,480,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v6_Color.png 153,223,498,440,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/tu_Color.png 273,17,541,402,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/test_Color.png 216,133,406,478,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/test2_Color.png 141,95,372,398,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m_Color.png 189,89,535,448,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/m5_C_Color.png 198,58,538,406,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/j2_Color.png 168,136,430,474,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/c1_Color.png 228,72,571,354,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/c88_Color.png 187,20,498,327,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/c4_Color.png 287,34,505,427,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/kk_Color.png 198,96,537,348,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b4_Color.png 131,265,422,435,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b1_Color.png 291,231,480,366,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b54_Color.png 56,135,486,353,0\n', 
'/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b7_Color.png 220,169,622,352,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b2_Color.png 204,114,456,385,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b_Color.png 133,65,513,420,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/c99_Color.png 144,241,487,448,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/dark_Color.png 164,156,615,370,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/b8_Color.png 99,300,552,458,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/lll_Color.png 220,75,520,437,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v4_Color.png 166,124,491,429,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v1_Color.png 202,177,597,377,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/v5_Color.png 273,165,462,475,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/n1_Color.png 65,156,328,470,0\n', '/content/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/t7_Color.png 218,133,571,409,0']
Train on 39 samples, val on 4 samples, with batch size 32.
Traceback (most recent call last):
  File "Train_YOLO.py", line 229, in <module>
    callbacks=[logging, checkpoint],
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 40, in fit_generator
    model._make_train_function()
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 509, in _make_train_function
    loss=self.total_loss)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 504, in get_updates
    return [self.apply_gradients(grads_and_vars)]
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 433, in apply_gradients
    self._create_slots(var_list)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/adam.py", line 149, in _create_slots
    self.add_slot(var, 'm')
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 585, in add_slot
    initial_value=initial_value)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 260, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 254, in _variable_v2_call
    shape=shape)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 235, in <lambda>
    previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variable_scope.py", line 2552, in default_variable_creator_v2
    shape=shape)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 262, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py", line 1406, in __init__
    distribute_strategy=distribute_strategy)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py", line 1538, in _init_from_args
    name="initial_value", dtype=dtype)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 1273, in internal_convert_to_tensor
    (dtype.name, value.dtype.name, value))

ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: <tf.Tensor 'training/Adam/Adam/conv2d_59/kernel/m/Initializer/zeros:0' shape=(1, 1, 1024, 18) dtype=float32>
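
For reference, the traceback mixes two Keras installs: the standalone keras package under /usr/local/lib/python3.6/dist-packages/keras and TensorFlow 1.15's bundled Keras under /tensorflow-1.15.2/.../tensorflow_core/python/keras. A quick way to confirm a version mismatch is a check along these lines (a minimal sketch for a Colab cell, not part of the repo):

```python
# Minimal diagnostic sketch for a Colab cell (not part of TrainYourOwnYOLO):
# print the versions of TensorFlow and the standalone Keras package, since
# mixing the two Keras implementations is what triggers the float32_ref error.
import tensorflow as tf
import keras  # the standalone package under /usr/local/.../dist-packages

print("TensorFlow:", tf.__version__)  # 1.15.2 in the traceback above
print("Keras:", keras.__version__)    # should match what requirements.txt pins
```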
johnjhr commented 4 years ago

> Hello, I am training the model on my own dataset on Google Colab and have followed all the instructions in the Google Colab Tutorial word for word. I specified the path to the training images as a command-line argument to the Train_YOLO.py script, but when I run the training script, the following error occurs:
>
> ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: <tf.Tensor 'training/Adam/Adam/conv2d_59/kernel/m/Initializer/zeros:0' shape=(1, 1, 1024, 30) dtype=float32>
>
> Thanks

Fixed by running pip install -r requirements.txt in the TrainYourOwnYOLO root folder.
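
For anyone else hitting this in Colab, a cell along these lines applies the fix; the /content/TrainYourOwnYOLO path is the clone location shown in the traceback above, so adjust it if yours differs:

```python
# Colab cell sketch (path taken from the traceback above; adjust if needed).
# Reinstalls the versions pinned by the repo so the standalone Keras package
# and TensorFlow agree again.
%cd /content/TrainYourOwnYOLO
!pip install -r requirements.txt
# Restart the runtime (Runtime > Restart runtime) so the re-pinned packages
# are actually loaded before re-running Train_YOLO.py.
```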

bushra-hafeez commented 4 years ago

Got it, thanks!