WongKinYiu / yolor

Implementation of the paper "You Only Learn One Representation: Unified Network for Multiple Tasks" (https://arxiv.org/abs/2105.04206)
GNU General Public License v3.0

Question - Training with Implicit + Decouple Head + SimOTA + Free Anchor setting #125

Closed DacDinh147 closed 3 years ago

DacDinh147 commented 3 years ago

Hi author. Thanks for your great work; I have learned a lot from you. I read your development log for YOLOv4, and I would like to learn from your experience training with the Implicit + Decoupled Head + SimOTA + Free Anchor setting.

I reimplemented your work in a new framework, adding a Decoupled Head + SimOTA + Free Anchor on top of Implicit A + M.
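For readers following the thread: yolor's Implicit A (additive) and Implicit M (multiplicative) layers are small learned priors applied to feature maps. A minimal numpy sketch of the idea (in the repo these are `nn.Parameter` tensors trained jointly with the network; the class names mirror the repo, but this is an illustration, not the actual implementation):

```python
import numpy as np

class ImplicitA:
    """Learned additive prior: y = x + implicit (per-channel)."""
    def __init__(self, channels):
        # initialized near zero, so the layer starts as an identity
        self.implicit = np.zeros((1, channels, 1, 1))
    def __call__(self, x):
        return x + self.implicit

class ImplicitM:
    """Learned multiplicative prior: y = x * implicit (per-channel)."""
    def __init__(self, channels):
        # initialized at one, so the layer starts as an identity
        self.implicit = np.ones((1, channels, 1, 1))
    def __call__(self, x):
        return x * self.implicit
```

At initialization both layers are identities, which is why stacking them on an existing head is a low-risk modification; the priors only deviate from identity as training proceeds.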

[figure: W&B chart, 10/20/2021, showing the mAP curve during training]

Currently, I am training from scratch on the COCO128 toy dataset, and training is very unstable: around the 4k and 6k iteration marks, mAP suddenly drops, as shown in the figure above. Did you experience this issue in your own training, and could you share any tricks or insights to avoid it? I would appreciate it a lot.

WongKinYiu commented 3 years ago

I think it is due to the cosine-annealing learning-rate schedule changing greatly during that period.

[figure: cosine-annealing learning-rate curve]
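To make the point concrete, here is a sketch of a one-cycle cosine-annealing schedule of the kind used in YOLOR/YOLOv5-style training loops (the default values `epochs=300`, `lr0=0.01`, `lrf=0.2` are illustrative assumptions, not the repo's exact hyperparameters):

```python
import math

def one_cycle_lr(epoch, epochs=300, lr0=0.01, lrf=0.2):
    # Decays the learning rate from lr0 down to lr0 * lrf along half a
    # cosine wave over `epochs`. The slope of this curve is steepest near
    # the middle of the cycle, where the LR changes fastest per step.
    return lr0 * (((1 + math.cos(epoch * math.pi / epochs)) / 2) * (1 - lrf) + lrf)
```

The steepest part of the cosine falls mid-schedule, which is consistent with the observation above: the mAP dips coincide with the period where the learning rate is changing most rapidly.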

In our yolor-csp experiments: decoupled head gives +0.5% AP, SimOTA gives +1.0% AP, and free anchor gives about the same AP (±0.2%). Multi-positive assignment works well for fine-tuning a model but usually does not work when training from scratch.
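For readers unfamiliar with SimOTA, its core is a dynamic-k label assignment: each ground-truth box gets k matched anchors, where k is derived from the sum of its top IoUs, and the k lowest-cost anchors are chosen. A minimal numpy sketch under those assumptions (function and parameter names are illustrative, not YOLOR's actual API):

```python
import numpy as np

def simota_assign(cost, ious, top_candidates=10):
    """Sketch of SimOTA-style dynamic-k assignment.

    cost: (num_gt, num_anchors) matching cost (e.g. cls loss + lambda * IoU loss)
    ious: (num_gt, num_anchors) IoU between each GT box and each prediction
    Returns a (num_gt, num_anchors) 0/1 matching matrix.
    """
    num_gt, num_anchors = cost.shape
    matching = np.zeros_like(cost, dtype=int)

    # dynamic k per GT: sum of its top-q IoUs, floored to at least 1
    q = min(top_candidates, num_anchors)
    top_ious = -np.sort(-ious, axis=1)[:, :q]
    ks = np.clip(top_ious.sum(axis=1).astype(int), 1, None)

    # each GT claims its k lowest-cost anchors
    for g in range(num_gt):
        idx = np.argsort(cost[g])[: ks[g]]
        matching[g, idx] = 1

    # resolve anchors claimed by multiple GTs: keep only the lowest-cost GT
    for a in np.where(matching.sum(axis=0) > 1)[0]:
        best = np.argmin(cost[:, a])
        matching[:, a] = 0
        matching[best, a] = 1
    return matching
```

The dynamic k is what distinguishes SimOTA from fixed top-k assignment: well-localized GTs (high cumulative IoU) receive more positives, which is also why the multi-positive behavior mentioned above interacts with whether the model is fine-tuned or trained from scratch.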

DacDinh147 commented 3 years ago

@WongKinYiu Thank you for your quick and detailed response. After reading the article, I understand the situation I was in. I will close the question. Have a nice day!

voicccc commented 2 years ago

@WongKinYiu @DacDinh147 Where are these improvements?