ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to correctly supply input dimensions to train.py and detect.py #6755

Closed · gj-raza closed 2 years ago

gj-raza commented 2 years ago

Search before asking

Question

I generated my dataset in Roboflow from 1080x1920 images and resized them to 640x640 (letterbox resize). How do I supply this correctly to the train and detect scripts, given that they only take a single dimension as an argument? How will the scripts resize images for training and inference if I just give --img 640?

Thanks in advance.

Additional

No response

github-actions[bot] commented 2 years ago

👋 Hello @gj-raza, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

kellymore commented 2 years ago

Hi @gj-raza

I'm the Developer Advocate at Roboflow. As long as your dataset and train/valid/test image splits are in your YOLOv5-CustomTraining Colab notebook, you shouldn't need to change anything when running train.py except updating the model's input size to match what you resized the images to.

After you run the cell, training should kick off from there. Here is a video tutorial, starting from the point where this is explained in a bit more depth.

If there was any confusion about why you only give a single dimension rather than separate width and height values, it's because YOLO operates on square images, so the one dimension you provide (width) is assumed to match the other dimension (height).

glenn-jocher commented 2 years ago

@gj-raza Yes, for most commands in YOLOv5 you can supply a single --img-size. This is the long side of your image; the short side is handled automatically.
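
As a rough worked example (assuming the default stride of 32): a 1080x1920 frame given --img 640 is scaled by 640/1920 ≈ 1/3 to about 360x640, and the 360 side is then padded up to 384, the nearest stride multiple, giving an effective 384x640 input.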

gj-raza commented 2 years ago

@glenn-jocher So this means that if my dataset is 640x640 and I supply --img-size 640, the script will resize my images to 640x384 while training?

glenn-jocher commented 2 years ago

@gj-raza Square mosaics are used during training; val.py and detect.py use rectangular inference for PyTorch models and square inference for other backends (TF, ONNX, etc.).
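
For illustration, the same detect.py call is handled differently depending on the weights backend (the file names below are placeholders; padding is done internally):

python detect.py --weights yolov5s.pt --source video.mp4 --img 640    # PyTorch weights: rectangular letterboxed input
python detect.py --weights yolov5s.onnx --source video.mp4 --img 640  # exported ONNX weights: square input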

gj-raza commented 2 years ago

@glenn-jocher Can you please share the reason for this? Why do PyTorch models use rectangular inference? And if the model is trained using the training data dimensions mentioned above, how will the script resize a 1080x1920 image coming from a feed at inference time, assuming PyTorch inference?

mysteryjeans commented 2 years ago

> @gj-raza Square mosaics are used during training; val.py and detect.py use rectangular inference for PyTorch models and square inference for other backends (TF, ONNX, etc.).

I am handling the short side in preprocessing for Microsoft's ONNX Runtime. For best results on ONNX, should I export a square shape instead of rectangular?

gj-raza commented 2 years ago

> > @gj-raza Square mosaics are used during training; val.py and detect.py use rectangular inference for PyTorch models and square inference for other backends (TF, ONNX, etc.).
>
> I am handling the short side in preprocessing for Microsoft's ONNX Runtime. For best results on ONNX, should I export a square shape instead of rectangular?

Yes.
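
For example, a square ONNX export could look like this (yolov5s.pt stands in for your trained weights; --imgsz takes height and width):

python export.py --weights yolov5s.pt --include onnx --imgsz 640 640  # fixed 640x640 input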

github-actions[bot] commented 2 years ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

00kar commented 2 years ago

> @glenn-jocher Can you please share the reason for this? Why do PyTorch models use rectangular inference? And if the model is trained using the training data dimensions mentioned above, how will the script resize a 1080x1920 image coming from a feed at inference time, assuming PyTorch inference?

@glenn-jocher This question interests me, too.

glenn-jocher commented 2 years ago

@00kar PyTorch inference resizes the image so its long side equals --imgsz and handles the short side automatically, using the minimum area that meets the stride constraints.
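
A minimal sketch of that shape computation (an illustration of the idea, not the repo's actual letterbox() code; stride 32 assumed):

import math

def inference_shape(h, w, imgsz=640, stride=32):
    # scale so the long side equals imgsz, preserving aspect ratio
    r = imgsz / max(h, w)
    h, w = round(h * r), round(w * r)
    # round each side up to the nearest stride multiple (padding covers the difference)
    return math.ceil(h / stride) * stride, math.ceil(w / stride) * stride

print(inference_shape(1080, 1920))  # -> (384, 640)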

00kar commented 2 years ago

> @00kar PyTorch inference resizes the image so its long side equals --imgsz and handles the short side automatically, using the minimum area that meets the stride constraints.

Thanks for the reply. If the model has been trained on square images, how well will it work during inference on rectangular images?

glenn-jocher commented 2 years ago

@00kar Image size and shape are irrelevant to the results. Object size and shape are all that matter.

00kar commented 2 years ago

> @00kar Image size and shape are irrelevant to the results. Object size and shape are all that matter.

Okay, thanks