david8862 / keras-YOLOv3-model-set

end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies

training dataset #112

Open Mostafa-elgendy opened 4 years ago

Mostafa-elgendy commented 4 years ago

I am using your code to train on my own dataset. I have already used the tiny-yolov3 model for my application.
I used this command for training:

python train.py --model_type=tiny_yolo3_darknet_lite --anchors_path=configs/tiny_yolo3_anchors.txt --annotation_file=configs/aruco_train_annotations.txt --val_annotation_file=configs/aruco_validate_annotations.txt --classes_path=configs/aruco_classes.txt --eval_online --eval_epoch_interval=5 --save_eval_checkpoint --total_epoch=60 --batch_size=16

I ran the above command only once and at the end got the following results:

Epoch 00060: val_loss did not improve from 2.24520
Eval model: 100% 7200/7200 [01:51<00:00, 64.58it/s]
Pascal VOC AP evaluation
id_001: AP 1.0000, precision 0.9160, recall 1.0000
id_002: AP 1.0000, precision 0.9934, recall 1.0000
id_003: AP 0.9917, precision 1.0000, recall 0.9917
id_004: AP 1.0000, precision 0.9646, recall 1.0000
id_005: AP 1.0000, precision 0.9509, recall 1.0000
id_006: AP 1.0000, precision 0.9917, recall 1.0000
id_007: AP 1.0000, precision 0.9677, recall 1.0000
id_008: AP 1.0000, precision 0.8837, recall 1.0000
id_009: AP 1.0000, precision 0.9983, recall 1.0000
id_010: AP 1.0000, precision 0.9274, recall 1.0000
id_011: AP 1.0000, precision 0.8276, recall 1.0000
id_012: AP 1.0000, precision 0.9479, recall 1.0000
mAP@IoU=0.50 result: 99.930556
mPrec@IoU=0.50 result: 94.743220
mRec@IoU=0.50 result: 99.930556

david8862 commented 4 years ago

@Mostafa-elgendy seems you've got a well-trained model and the mAP is quite high (a little bit too high...). Maybe you can dump out the inference model and use "eval.py" to evaluate it on your test set. "--save_result" is useful when evaluating, to store the visualized detection results and confirm the model works as expected.
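For reference, such an eval.py run on a held-out test annotation file could look roughly like the line below. The model path and the test annotation file here are placeholders for your own files, and the exact flag names should be double-checked against python eval.py -h in your checkout:

python eval.py --model_path=logs/000/trained_final.h5 --anchors_path=configs/tiny_yolo3_anchors.txt --classes_path=configs/aruco_classes.txt --annotation_file=configs/aruco_test_annotations.txt --save_result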

For cross-validation, there's a "--data_shuffle" option in train.py for that. You can try it if needed.

Mostafa-elgendy commented 4 years ago

Thanks a lot for your answer. I have already run an evaluation with "eval.py" on my test set, as you suggested. But I also want to ask something else.

Mostafa-elgendy commented 4 years ago

I also want to ask: how can I use k-fold cross-validation to evaluate it?

david8862 commented 4 years ago

@Mostafa-elgendy

  1. If the visualized evaluation results look fine, then the model can be used for its object detection task. There is no need to re-train it unless you have more training data.
  2. To compare the accuracy of 2 different object detection models, the standard way is simply to check their mAP/AP results.
  3. For k-fold cross validation, I didn't prepare a related tool for that. Maybe you can create a wrapper script for eval.py to run it on your val/test set, as sketched below.
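A minimal sketch of such a wrapper could look like the following. It is not part of the repo: it assumes the train.py flags from the command earlier in this thread and the usual eval.py options (--model_path, --anchors_path, --classes_path, --annotation_file), and the checkpoint path passed to eval.py is a placeholder you would replace with whatever train.py actually saved in your run. Note that proper k-fold cross-validation re-trains the model on each fold, not just re-evaluates it.

```python
# k_fold_eval.py -- hypothetical wrapper script, not part of this repo.
# Splits one annotation file into K folds, then trains on K-1 folds and
# evaluates on the held-out fold for each split.
import random
import subprocess
from pathlib import Path

K = 5
ANNOTATIONS = Path("configs/aruco_train_annotations.txt")
CLASSES = "configs/aruco_classes.txt"
ANCHORS = "configs/tiny_yolo3_anchors.txt"

lines = ANNOTATIONS.read_text().splitlines()
random.seed(42)
random.shuffle(lines)
folds = [lines[i::K] for i in range(K)]  # round-robin split into K folds

for i in range(K):
    val_lines = folds[i]
    train_lines = [ln for j, fold in enumerate(folds) if j != i for ln in fold]
    train_file = Path(f"fold_{i}_train.txt")
    val_file = Path(f"fold_{i}_val.txt")
    train_file.write_text("\n".join(train_lines) + "\n")
    val_file.write_text("\n".join(val_lines) + "\n")

    # Train on the K-1 folds (flags mirror the command used earlier in this issue).
    subprocess.run([
        "python", "train.py",
        "--model_type=tiny_yolo3_darknet_lite",
        f"--anchors_path={ANCHORS}",
        f"--annotation_file={train_file}",
        f"--val_annotation_file={val_file}",
        f"--classes_path={CLASSES}",
        "--total_epoch=60", "--batch_size=16",
    ], check=True)

    # Evaluate on the held-out fold; replace --model_path with the checkpoint
    # your train.py run actually saved (the path below is a placeholder).
    subprocess.run([
        "python", "eval.py",
        "--model_path=logs/000/trained_final.h5",
        f"--anchors_path={ANCHORS}",
        f"--classes_path={CLASSES}",
        f"--annotation_file={val_file}",
    ], check=True)
```

Averaging the per-fold mAP numbers printed by eval.py then gives the cross-validated score.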