ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0
50.37k stars 16.26k forks

Question for Validation dataset VS Test dataset #4771

Closed Ronald-Kray closed 2 years ago

Ronald-Kray commented 3 years ago

@glenn-jocher

Hi there. In general, the recommended data split ratio is said to be Train:Val:Test = 60:20:20. Is the test set (20%) meant to prevent overfitting?

I think YOLOv5 in particular doesn't need a test dataset, because best.pt is produced during training to guard against overfitting. In my case, I use videos as the test: I check confidence scores on unseen videos (Train:Val:Test = 80:20:unseen videos).

Please give me a comment on whether I should make a test set (e.g. Train:Val:Test = 60:20:20) to prevent overfitting.

Thanks for your great work.


glenn-jocher commented 3 years ago

@Ronald-Kray there's no right answer to this; it depends on the context, the amount of data, etc. Test sets are typically holdout sets without labels that a competition organizer can use to verify submissions.

Our own dataset splitting function is here, with defaults of 90/10/0, though this is more aggressive than the traditional 70/20/10. Naturally, the more data you train on, the better your model will generalize. https://github.com/ultralytics/yolov5/blob/aa1859909c96d5e1fc839b2746b45038ee8465c9/utils/datasets.py#L837-L844

For more info see https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets
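
For illustration, here is a minimal sketch of such a random file-level split, assuming a flat folder of .jpg images; the function name and paths are placeholders, and the real implementation is the autosplit() function linked above:

```python
# Minimal sketch of a 90/10/0-style split by image file (not the actual autosplit()).
import random
from pathlib import Path


def split_images(img_dir, weights=(0.9, 0.1, 0.0), seed=0):
    """Write autosplit_train/val/test .txt files listing the image paths in each split."""
    random.seed(seed)
    txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt']
    for t in txt:
        (Path(img_dir).parent / t).unlink(missing_ok=True)  # remove any existing split files
    for f in sorted(Path(img_dir).glob('*.jpg')):
        i = random.choices(range(3), weights=weights)[0]  # assign this image to a split
        with open(Path(img_dir).parent / txt[i], 'a') as out:
            out.write(f'./{f.name}\n')
```

Called with weights=(0.6, 0.2, 0.2), the same sketch would produce the traditional-style split discussed above.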

bdeng3 commented 3 years ago

I also have a question regarding the validation set vs. the test set. In the data YAML file we are asked to provide a path for the training set and another path for the validation set.

Does the "validation set" in YOLOv5's context mean a "test set" (not used in training at all, only seen once for prediction after training), or is the "validation set" here used during training for hyperparameter tuning?

glenn-jocher commented 3 years ago

@bdeng3 yes, both are correct. Hyperparameters are evolved for best performance on the val set, but the val set never contributes to training losses, so it is not involved in gradient backpropagation and thus never directly affects the model.
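
To illustrate this in a generic way (a minimal PyTorch sketch with synthetic data, not YOLOv5's actual training loop): the val split is only evaluated under torch.no_grad() and only used to decide which checkpoint to keep, so no gradients ever flow from it.

```python
# Sketch: the val split informs checkpoint selection but never backpropagation.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for the train and val splits
x_train, y_train = torch.randn(80, 10), torch.randn(80, 1)
x_val, y_val = torch.randn(20, 10), torch.randn(20, 1)

best_val = float('inf')
for epoch in range(20):
    # Training step: gradients come only from the train split
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

    # Validation: no_grad, used only to pick the best checkpoint (analogous to best.pt)
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val = val_loss
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
```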

bdeng3 commented 3 years ago

@glenn-jocher I see. Thanks for the great work!

Ronald-Kray commented 3 years ago

@glenn-jocher Here is a more specific question.

I'm working on calculating the mAP of object detection algorithms (YOLOv5, YOLOv4, EfficientDet).

In my opinion, there are two ways to calculate mAP (assume the dataset consists entirely of labeled images).

1. Split the dataset as Train:Val = 80:20, and report the mAP calculated on the Val dataset.

2. Split the dataset as Train:Val:Test = 60:20:20. After calculating mAP on the Val dataset, run one additional mAP calculation on the Test dataset to assess the performance of the fully trained model. A single evaluation pass on the Test dataset would be enough, because the model is already fully trained using the Val dataset. Some references say that, since the Val dataset is already referred to during training, calculating mAP on a separate test set is the more accurate way.

I think the first one is correct, but in my research group discussion someone said the second one is correct. I'm still not sure which is right. Can you give me some advice or references on this?

glenn-jocher commented 3 years ago

@Ronald-Kray most common datasets include predefined splits. If you have a custom dataset you can use our autosplit() function below with default train/val/test splits of 90/10/0:

https://github.com/ultralytics/yolov5/blob/9febea79de895191bd7a375e5c5a61bfa2886c89/utils/datasets.py#L842-L849
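
As a usage sketch (the dataset path below is a placeholder, and the keyword names follow the linked source, so check them against your version):

```python
# Hypothetical example: split a custom image folder; autosplit() writes
# autosplit_train.txt / autosplit_val.txt / autosplit_test.txt next to it.
from utils.datasets import autosplit

autosplit(path='../datasets/custom/images', weights=(0.6, 0.2, 0.2))  # explicit 60/20/20 split
```

With the default weights of (0.9, 0.1, 0.0), no test file list is produced.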

Ronald-Kray commented 3 years ago

@glenn-jocher Alright. My question is: when calculating official mAP, do researchers use only a validation dataset, or do they calculate mAP on a test dataset with a model whose selection was informed by the validation dataset?

glenn-jocher commented 3 years ago

@Ronald-Kray commands to reproduce official mAP are in README Table Notes: https://github.com/ultralytics/yolov5#pretrained-checkpoints

github-actions[bot] commented 3 years ago

πŸ‘‹ Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 πŸš€ and Vision AI ⭐!

AmelieVernay commented 2 years ago

> @bdeng3 yes, both are correct. Hyperparameters are evolved for best performance on the val set, but the val set never contributes to training losses, so it is not involved in gradient backpropagation and thus never directly affects the model.

@glenn-jocher Hello, I understand your answer but I was wondering: if we run train.py without --evolve (default), are there still some hyperparameters that are tuned during training, and if so, which ones? Thank you

glenn-jocher commented 2 years ago

no

AmelieVernay commented 2 years ago

> no

Ok thank you.

glenn-jocher commented 11 months ago

@AmelieVernay you're welcome! If you have any more questions, feel free to ask.