david8862 / keras-YOLOv3-model-set

end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies
MIT License

Extending existing training dataset with new class and images fails #192

Open · mdatre opened this issue 3 years ago

mdatre commented 3 years ago

Hi,

I am trying to do transfer learning. I have a pre-labeled image dataset on which train.py runs correctly. But when I extend that dataset by adding a new class to the existing class file, adding new PNG images annotated with the LabelImg tool, and converting the LabelImg annotations into the expected xmin,ymin,xmax,ymax,class format in one single file as described in the README, train.py fails soon after training starts with the error below (a quick format check on the annotation file is sketched after the traceback).

Am I missing something?

tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument:  Input to reshape is a tensor with 1406080 values, but the requested shape requires a multiple of 44785
         [[node functional_1/yolo_loss/Reshape_3 (defined at /home/ubuntu/logo-detection/David-keras-YOLO3/yolo2/postprocess.py:51) ]]
         [[ReadVariableOp_1/_44]]
  (1) Invalid argument:  Input to reshape is a tensor with 1406080 values, but the requested shape requires a multiple of 44785
         [[node functional_1/yolo_loss/Reshape_3 (defined at /home/ubuntu/logo-detection/David-keras-YOLO3/yolo2/postprocess.py:51) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_7754]

Errors may have originated from an input operation.
Input Source operations connected to node functional_1/yolo_loss/Reshape_3:
 functional_1/conv2d_21/BiasAdd (defined at train.py:245)

Input Source operations connected to node functional_1/yolo_loss/Reshape_3:
 functional_1/conv2d_21/BiasAdd (defined at train.py:245)

Function call stack:
train_function -> train_function
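
As a quick sanity check on the extended annotation file: the README's single-file format is one row per image, the image path followed by space-separated xmin,ymin,xmax,ymax,class_id boxes, with zero-based class ids indexing into the classes file. Something like the hypothetical helper below (not part of the repo; file names are placeholders) can flag malformed boxes or class ids that fall outside the updated classes file:

# Hypothetical annotation sanity check; assumes the README's
# "image_path xmin,ymin,xmax,ymax,class_id ..." single-file format.
def check_annotations(annotation_file, classes_file):
    with open(classes_file) as f:
        num_classes = sum(1 for line in f if line.strip())

    with open(annotation_file) as f:
        for line_no, line in enumerate(f, start=1):
            parts = line.strip().split()
            if not parts:
                continue
            for box in parts[1:]:  # parts[0] is the image path
                fields = box.split(',')
                if len(fields) != 5:
                    print(f"line {line_no}: malformed box '{box}'")
                    continue
                try:
                    class_id = int(fields[4])
                except ValueError:
                    print(f"line {line_no}: non-integer class id in '{box}'")
                    continue
                if not 0 <= class_id < num_classes:
                    print(f"line {line_no}: class_id {class_id} is outside "
                          f"0..{num_classes - 1}")

# Placeholder file names, not from the repo.
check_annotations('train_annotations.txt', 'my_classes.txt')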
david8862 commented 3 years ago

@mdatre it is still not very clear which case you are trying. Are you loading the pretrained model for the training run on the new dataset, or just re-training the model from scratch?
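
For context, the "requires a multiple of 44785" message is consistent with a class-count mismatch: 44785 = 13 × 13 × 5 × (5 + 48), i.e. a 13×13 grid, 5 anchors and 48 classes in the requested shape, while the tensor being reshaped still carries the element count of a prediction head built for a different number of classes. The snippet below is a minimal sketch of that relationship under the standard YOLOv2 output layout, not the repo's exact code, and all the concrete numbers are illustrative:

# Sketch of the shape relationship behind the error (assumes the standard
# YOLOv2 head layout; not the repo's exact code). The final conv layer
# emits num_anchors * (5 + num_classes) channels per grid cell, and the
# loss reshapes that to (batch, grid_h, grid_w, num_anchors, 5 + num_classes).
import tensorflow as tf

grid_h, grid_w, num_anchors = 13, 13, 5
model_classes = 20   # class count the loaded/pretrained head was built with
file_classes = 21    # class count after extending the classes file

# Simulated raw output of the final conv layer of the loaded model.
prediction = tf.zeros((16, grid_h, grid_w, num_anchors * (5 + model_classes)))

try:
    # The loss/postprocess reshapes using the *current* class count.
    tf.reshape(prediction,
               (-1, grid_h, grid_w, num_anchors, 5 + file_classes))
except tf.errors.InvalidArgumentError as e:
    # "Input to reshape is a tensor with ... requires a multiple of ..."
    print(e)

If that is what is happening, the usual remedy is to rebuild the prediction layer for the extended classes file and load the pretrained weights only into the layers whose shapes still match.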