Open wzf19947 opened 3 days ago
👋 Hello @wzf19947, thank you for your interest in YOLOv5 🚀! This is an automated response to assist you, and an Ultralytics engineer will also look into your issue soon.
If this is a 🐛 Bug Report, could you please provide a minimum reproducible example (MRE)? This would help us understand and debug the issue more effectively.
If this is a question about custom training ❓, please include as much detail as possible, such as dataset image examples and training logs. Also, ensure you are following our tips for best training results.
Make sure you have Python>=3.8.0 installed along with all necessary dependencies, including PyTorch>=1.8. You can set up your environment by cloning the repository and installing the requirements.
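As a quick sanity check before installing dependencies, you can verify the interpreter version from Python itself. A minimal sketch (the helper name is illustrative; the PyTorch>=1.8 check is left out since torch may not be installed yet):

```python
import sys

def python_ok(min_version=(3, 8)):
    """Return True if the running interpreter meets YOLOv5's
    Python>=3.8 requirement (helper name is illustrative)."""
    return sys.version_info >= min_version

assert python_ok(), f"Python 3.8+ required, found {sys.version.split()[0]}"
```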
YOLOv5 is compatible with various environments, including notebooks with free GPU, Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and Docker image setups.
If the Continuous Integration (CI) tests are passing, it indicates that YOLOv5's training, validation, inference, export, and benchmark scripts are functioning correctly across different operating systems.
Please let us know if you need further assistance! 😊
I set imgsz=640 or [640,640] in train.py and got the same issue. I just want to train a model with unequal width/height, but it gets a very low mAP. How can I solve that?
@wzf19947 to address the mAP discrepancy when using non-square image sizes, ensure that your dataset is well-prepared and the model is trained with appropriate settings. Training with `--img-size` set to `[640, 640]` should be equivalent to `640` if the aspect ratio is already 1:1. For models with different aspect ratios, consider resizing your images while maintaining aspect ratio or experimenting with multi-scale training to improve performance. If issues persist, verify your setup with the latest YOLOv5 version and review your dataset and augmentation settings for potential improvements.
> set imgsz=640 or [640,640] in train.py, it got the same issue, I just want to train a w/h not equal model, but it got so low mAP, how can I solve that?
This one is very strange. I think val with --imgsz=640 or [640,640] should give the same results, but it didn't.
It seems you're experiencing unexpected mAP variations with different image size settings. Ensure your dataset is well-prepared and that your training setup matches your validation conditions, including any augmentations or preprocessing steps. Also, verify that you're using the latest version of YOLOv5, as updates may address underlying issues. If the problem persists, consider checking your dataset for any inconsistencies or running additional tests to isolate the cause.
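One low-cost way to rule out a scalar-vs-list parsing difference is to normalize the size spec to an explicit (h, w) pair before it reaches the dataloader. A minimal sketch with a hypothetical helper (YOLOv5's own handling lives in its argument parsing and `check_img_size`, which this does not reproduce):

```python
def normalize_imgsz(imgsz):
    """Expand a scalar or 1-element image-size spec to an (h, w) tuple.
    Illustrative helper, not YOLOv5's actual implementation."""
    if isinstance(imgsz, int):
        return (imgsz, imgsz)
    if len(imgsz) == 1:
        return (imgsz[0], imgsz[0])
    return tuple(imgsz)

# After normalization, 640 and [640, 640] describe the same shape,
# so any remaining mAP gap must come from a later preprocessing step.
assert normalize_imgsz(640) == normalize_imgsz([640, 640]) == (640, 640)
```

If both specs normalize to the same shape but validation metrics still differ, compare the actual tensor shapes fed to the model in each run to locate where the preprocessing paths diverge.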
Search before asking
Question
I trained a model with imgsz of 640. When I val the best.pt, I find that the imgsz set in val.py gives different results: e.g., with imgsz=640 I got mAP=0.95, but with imgsz=[640,640] I got mAP=0.86. That's a big difference. Did I do something wrong?
Additional
No response