Closed: bonlycpe closed this issue 2 years ago
👋 Hello @conandoor1, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
I have found this one:
@conandoor1
python val.py --data DATA.yaml --weights MODEL.pt
Thanks ^^. Another question: I see that train.py uses val.py too. Does that mean every training run is automatically evaluated at the end? I'm confused about the optional test step in these tutorials as well.
@conandoor1 yes, train.py runs validation every epoch, and again on best.pt after training completes.
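In practice that looks like the commands below (a minimal sketch; DATA.yaml is a placeholder and runs/train/exp is the default run directory, so adjust paths to your setup):
python train.py --data DATA.yaml --weights yolov5s.pt --epochs 100  # trains and validates every epoch
python val.py --data DATA.yaml --weights runs/train/exp/weights/best.pt  # re-run validation on best.pt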
@glenn-jocher Thanks ^^
@glenn-jocher how can I get results like in the tutorial? I always get results like this:
@conandoor1 the second pic you have there is showing pycocotools results. These are only applied when evaluating the COCO dataset.
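For context, this is roughly how pycocotools produces those results (a minimal sketch; file names are illustrative):
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno = COCO('annotations.json')               # ground-truth annotations in COCO format
pred = anno.loadRes('best_predictions.json')  # detections saved by val.py --save-json
evaluator = COCOeval(anno, pred, 'bbox')      # bounding-box evaluation
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                         # prints the AP/AR summary table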
@glenn-jocher Thanks ^^
@glenn-jocher Hello! I get wildly different AP results when comparing val.py and testing myself with pycocotools:
python3 val.py ...
Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|██████████| 32/32 [00:49<00:00, 1.55s/it]
all 1024 678 0.546 0.494 0.518 0.221
Speed: 0.4ms pre-process, 2.7ms inference, 0.5ms NMS per image at shape (32, 3, 1280, 1280)
python3 val.py --save-json ...
then adjust the JSON (see the sketch after the table below) and use pycocotools to get the AP/AR tables:
Accumulating evaluation results...
DONE (t=0.03s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.127
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.323
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.081
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.039
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.083
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.191
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.177
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.186
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.186
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.124
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.174
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.218
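The "adjust the JSON" step above usually means remapping ids so the predictions file matches the ground-truth annotations. A hedged sketch (the 1-based category shift and the file names are assumptions that depend on your annotation file):
import json

with open('best_predictions.json') as f:      # written by val.py --save-json
    preds = json.load(f)
for p in preds:
    p['category_id'] += 1                     # assumption: ground truth uses 1-based class ids
with open('predictions_fixed.json', 'w') as f:
    json.dump(preds, f)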
Could you please shed some light on why this is happening and whether it is normal?
I've had this same scenario happen with YOLOR too. Back in the day I got the same-ish custom (6-class) AP results with pycocotools and Darknet... the pycocotools repo hasn't changed since then, so my understanding is that pycocotools metrics should be fine for custom data too.
@pabsan-0 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be minimal, complete, and reproducible.
For Ultralytics to provide assistance your code should also be current: verify that your code is up-to-date with the GitHub master branch, and if necessary git pull or git clone a new copy to ensure your problem has not already been solved in master.
If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.
Thank you! 😃
@glenn-jocher Hello! Thanks for your reply. After reading your previous answer in this thread I thought you discouraged COCO metrics for non-COCO datasets, and I assumed either that YOLOv5 somehow would not play nice with external COCO validation or that the difference was my data's fault.
Then I saw this other issue in which you state --conf-thres should be 0.001. I made this correction and both validation approaches now yield the same results (with the usual ~1% difference). I'll raise my own issue if something similar comes up again.
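For reference, the corrected call looks roughly like this (a sketch; data and weights paths are illustrative):
python3 val.py --data DATA.yaml --weights MODEL.pt --conf-thres 0.001 --save-json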
Cheers!
@pabsan-0 great, I'm glad your metrics align.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
Question
How can I test a YOLOv5 weight that I custom-trained myself?
Additional
No response