ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Zeroes in Hyperparameter Evolution? #9214

Closed · HighMans closed this issue 1 year ago

HighMans commented 1 year ago

Search before asking

YOLOv5 Component

Evolution

Bug

When running hyperparameter evolution based on hyp.scratch-high.yaml, I noticed that the degrees, shear, perspective, and flipud parameters always stayed at zero, despite each having a non-zero mutation scale.

What I think is happening is that if a parameter's initial value for the evolution and its lower limit in the meta dictionary are both zero, then the parameter will always be zero.

Because the mutation only multiplies the parent value by a random scalar, any value that is zero stays zero forever.

https://github.com/ultralytics/yolov5/blob/91a81d48fa4e34dbdbaf0e45a1f841c11216aab5/train.py#L598
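As a condensed sketch of that behavior (simplified, with approximate names; not the exact train.py code), the update multiplies each parent value by a random factor clipped to roughly [0.3, 3.0], so a parent value of exactly zero can never move:

import numpy as np

# Simplified multiplicative mutation: child = parent * factor, factor clipped to ~[0.3, 3.0],
# so 0.0 * factor is 0.0 no matter which factor is drawn.
rng = np.random.default_rng(0)
parent = {'degrees': 0.0, 'scale': 0.5}  # parent hyperparameter values
gains = {'degrees': 1.0, 'scale': 1.0}   # non-zero mutation scales from meta

child = {}
for k, x in parent.items():
    factor = float(np.clip(1 + gains[k] * rng.normal() * 0.2, 0.3, 3.0))
    child[k] = x * factor

print(child)  # 'degrees' is still 0.0; 'scale' has moved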

I'm also not sure whether the mutation process can itself produce a zero, but if it can, I think that value could get stuck at zero forever as well.

https://github.com/ultralytics/yolov5/blob/91a81d48fa4e34dbdbaf0e45a1f841c11216aab5/train.py#L575-L599

Environment

No response

Minimal Reproducible Example

from pprint import pprint

# Every value starts at zero to show which parameters the limit clamp below leaves at zero.
hyp = {'anchor_t': 0,
       'anchors': 0,
       'box': 0,
       'cls': 0,
       'cls_pw': 0,
       'copy_paste': 0,
       'degrees': 0,
       'fl_gamma': 0,
       'fliplr': 0,
       'flipud': 0,
       'hsv_h': 0,
       'hsv_s': 0,
       'hsv_v': 0,
       'iou_t': 0,
       'lr0': 0,
       'lrf': 0,
       'mixup': 0,
       'momentum': 0,
       'mosaic': 0,
       'obj': 0,
       'obj_pw': 0,
       'perspective': 0.0,
       'scale': 0.0,
       'shear': 0.0,
       'translate': 0.0,
       'warmup_bias_lr': 0.0,
       'warmup_epochs': 0.0,
       'warmup_momentum': 0.0,
       'weight_decay': 0.0}

# meta: (mutation scale, lower limit, upper limit) for each hyperparameter, as in train.py
meta = {
    'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
    'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
    'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
    'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
    'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)
    'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum
    'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr
    'box': (1, 0.02, 0.2),  # box loss gain
    'cls': (1, 0.2, 4.0),  # cls loss gain
    'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
    'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
    'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
    'iou_t': (0, 0.1, 0.7),  # IoU training threshold
    'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
    'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)
    'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
    'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
    'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
    'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
    'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
    'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
    'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
    'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
    'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
    'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
    'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
    'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
    'mixup': (1, 0.0, 1.0),  # image mixup (probability)
    'copy_paste': (1, 0.0, 1.0)}  # segment copy-paste (probability)

for k, v in meta.items():
    hyp[k] = max(hyp[k], v[1])  # lower limit
    hyp[k] = min(hyp[k], v[2])  # upper limit
    hyp[k] = round(hyp[k], 5)  # significant digits

pprint(hyp)
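Running this with the dictionaries exactly as listed shows that every parameter whose meta lower limit is 0.0, including degrees, shear, perspective and flipud, is still 0.0 after the clamp, which is the starting point the multiplicative mutation then never escapes. A quick check:

for k in ('degrees', 'shear', 'perspective', 'flipud'):
    assert hyp[k] == 0.0, k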

Additional

Example code adapted from the snippet in train.py linked below.

https://github.com/ultralytics/yolov5/blob/91a81d48fa4e34dbdbaf0e45a1f841c11216aab5/train.py#L529-L605

Are you willing to submit a PR?

github-actions[bot] commented 1 year ago

πŸ‘‹ Hello @HighMans, thank you for your interest in YOLOv5 πŸš€! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 1 year ago

@HighMans yes, that's correct: zero initial values will stay zero. If you want to mutate these values, initialize them to non-zero values. See the 'initial conditions' section of the hyperparameter tutorial:

Tutorials
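One way to apply this (a sketch only; the path, seed values, and flags below are assumptions for illustration, not official recommendations) is to copy the starting hyperparameter YAML, give the zero-valued entries small non-zero seeds, and point training at the copy:

import yaml  # PyYAML

# Illustrative seeds; choose values appropriate for your dataset.
src = 'data/hyps/hyp.scratch-high.yaml'
seeds = {'degrees': 1.0, 'shear': 1.0, 'perspective': 0.0001, 'flipud': 0.1}

with open(src) as f:
    hyp = yaml.safe_load(f)

for k, v in seeds.items():
    if not hyp.get(k):  # only touch entries that are currently zero or missing
        hyp[k] = v      # non-zero so the multiplicative mutation can move it

with open('hyp.evolve-seed.yaml', 'w') as f:
    yaml.safe_dump(hyp, f)

Then run evolution with something like python train.py --hyp hyp.evolve-seed.yaml --evolve (other arguments as usual).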

Good luck πŸ€ and let us know if you have any other questions!