Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0

Considerations for reducing PPYoloELoss/loss_iou #1516

Closed · fahadishaq1 closed this issue 1 year ago

fahadishaq1 commented 1 year ago

💡 Your Question

I am training an object detector (yolo_nas_s) on a custom dataset from scratch (i.e. without using pretrained weights). The results are good, but for the problem I am trying to solve, I need the predicted bounding boxes to be as close to the labeled bounding boxes as possible. Currently, I am using the following parameters with the default transformations, and I am getting a training PPYoloELoss/loss_iou of around 0.06 and a validation PPYoloELoss/loss_iou of around 0.02:

```python
train_params = {
    'silent_mode': False,
    "average_best_models": True,
    "warmup_mode": "linear_epoch_step",
    "warmup_initial_lr": 1e-6,
    "lr_warmup_epochs": 3,
    "initial_lr": 5e-4,
    "lr_mode": "cosine",
    "cosine_final_lr_ratio": 0.1,
    "optimizer": "Adam",
    "optimizer_params": {"weight_decay": 0.0001},
    "zero_weight_decay_on_bias_and_bn": True,
    "ema": True,
    "ema_params": {"decay": 0.9, "decay_type": "threshold"},
    "max_epochs": EPOCHS,
    "mixed_precision": True,
    "loss": PPYoloELoss(
        use_static_assigner=False,
        num_classes=len(dataset_params['classes']),
        reg_max=16
    ),
    "valid_metrics_list": [
        DetectionMetrics_050(
            score_thres=0.1,
            top_k_predictions=300,
            num_cls=len(dataset_params['classes']),
            normalize_targets=True,
            post_prediction_callback=PPYoloEPostPredictionCallback(
                score_threshold=0.01,
                nms_top_k=1000,
                max_predictions=300,
                nms_threshold=0.7
            )
        ),
        DetectionMetrics_050_095(
            score_thres=0.1,
            top_k_predictions=300,
            num_cls=len(dataset_params['classes']),
            normalize_targets=True,
            post_prediction_callback=PPYoloEPostPredictionCallback(
                score_threshold=0.01,
                nms_top_k=1000,
                max_predictions=300,
                nms_threshold=0.7
            )
        )
    ],
    "metric_to_watch": 'mAP@0.50:0.95'
}
```

yolov5 reduces its box_loss (box regression loss) to a much lower value on the same dataset. Are there any suggestions for improving this?

Versions

No response

shaydeci commented 1 year ago

Well, the first idea is to increase the IoU weight. This can be done through the iou_loss_weight argument of PPYoloELoss.
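A minimal sketch of that change, reusing the loss construction from the train_params above; the value 5.0 is only illustrative, and the default weight and exact keyword set may differ between super-gradients versions:

```python
from super_gradients.training.losses import PPYoloELoss

# Same loss as in the original train_params, but with a larger IoU weight so
# box regression contributes more to the total loss.
loss = PPYoloELoss(
    use_static_assigner=False,
    num_classes=len(dataset_params['classes']),
    reg_max=16,
    iou_loss_weight=5.0,  # illustrative value; tune it against mAP@0.50:0.95, not against the raw loss
)
```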

@BloodAxe Any more sophisticated ideas perhaps?

BloodAxe commented 1 year ago

I don't think it is fair to compare the losses of yolov5 and yolo_nas_s, for a number of reasons:

1. We may be using different IoU loss functions (nowadays no one uses vanilla IoU for box regression; it is GIoU/DIoU/CIoU or EIoU). I'm not familiar enough with yolov5 to say which one it uses; in YoloNAS we use GIoU.
2. Regardless of the IoU loss flavor, there is also an IoU weight hyperparameter that controls the gain of this component in the total loss. These are 99% certain to differ between YoloV5 and YoloNAS too.
3. There is also a per-anchor score multiplier that affects the IoU loss.

So, all in all, it is simply incorrect to assume the IoU losses share the same 'axis'.

mAP/AR metrics are a better basis for comparison, IMO.
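As a small, hedged illustration of point 1 (plain Python, no library code): for the same predicted/ground-truth pair, a GIoU-based loss and a vanilla IoU loss yield different numbers, so raw loss magnitudes from two detectors are not directly comparable.

```python
# Boxes are (x1, y1, x2, y2). GIoU = IoU - |C \ (A ∪ B)| / |C|, where C is the
# smallest box enclosing both A and B; the corresponding losses are 1 - IoU and 1 - GIoU.
def iou_and_giou(a, b):
    # intersection area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area
    return iou, giou

pred, gt = (0.0, 0.0, 2.0, 2.0), (1.0, 1.0, 3.0, 3.0)
iou, giou = iou_and_giou(pred, gt)
print(f"1 - IoU  = {1 - iou:.3f}")   # ~0.857
print(f"1 - GIoU = {1 - giou:.3f}")  # ~1.079, larger because of the enclosing-box penalty
```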

As @shaydeci pointed out, you can certainly increase the IoU and DFL loss weights if you want your model to focus on accurate box regression.

fahadishaq1 commented 1 year ago

Thanks for the pointers.

Increasing the weight did not seem to make a significant difference in the trial I have run so far, but it does answer my question.