Megvii-BaseDetection / YOLOX

YOLOX is a high-performance anchor-free YOLO, exceeding YOLOv3 through YOLOv5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
Apache License 2.0

Looking for help with transfer learning/fine-tuning YOLOX #1604

Open aeozyalcin opened 1 year ago

aeozyalcin commented 1 year ago

Hello,

I would like to enhance the standard COCO-trained YOLOX_tiny model with some additional images from the environment I intend to deploy it in, to improve accuracy and reduce false positives. I have seen tutorials on transfer learning/fine-tuning for other YOLO variants like YOLOv5, where the backbone is frozen, training is run, and then the backbone is unfrozen and training continues with smaller learning rates (fine-tuning).

I see that the ability to freeze layers is present in YOLOX, but I haven't been able to find a write-up/tutorial on which layers to freeze, what learning parameters to use, etc. Any help would be appreciated!
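For reference, the freeze-then-unfreeze recipe described above can be sketched in plain PyTorch. This is not a YOLOX API: the `backbone`/`head` attribute names mirror YOLOX's model layout, but `TinyDetector` is a hypothetical stand-in used only to make the snippet self-contained.

```python
# Minimal sketch of two-stage fine-tuning (freeze backbone, then unfreeze).
# Generic PyTorch; TinyDetector is a placeholder, not a real YOLOX model.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
        self.head = nn.Conv2d(8, 4, 1)

    def forward(self, x):
        return self.head(self.backbone(x))

def set_backbone_trainable(model, trainable):
    """Freeze or unfreeze every backbone parameter."""
    for p in model.backbone.parameters():
        p.requires_grad = trainable
    # Keep frozen BatchNorm layers in eval mode so running stats don't drift.
    if not trainable:
        for m in model.backbone.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.eval()

model = TinyDetector()

# Stage 1: train only the head on the new data.
set_backbone_trainable(model, False)
head_only = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(head_only, lr=0.01)

# Stage 2: unfreeze everything and continue with a smaller learning rate.
set_backbone_trainable(model, True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
```

The same idea applies to a real YOLOX `Exp`/model; only the attribute names and optimizer settings would change.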

Joker316701882 commented 1 year ago

@aeozyalcin Based on our experiments, there is no need to freeze any layers during fine-tuning. The current hyper-parameters (train from scratch on COCO) work well when fine-tuning on custom datasets.
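One practical wrinkle with this "no freezing" setup: if the custom dataset has a different number of classes than COCO, the classification branches of the head change shape, so those checkpoint tensors cannot be loaded directly. A common workaround, sketched below with hypothetical toy models rather than real YOLOX checkpoints, is to copy only the tensors whose shapes still match and let the rest initialize fresh.

```python
# Hedged sketch: load a pretrained state_dict into a model whose head has a
# different class count, keeping only shape-compatible tensors.
# The models here are placeholders, not the actual YOLOX architecture.
import torch
import torch.nn as nn

def load_matching_weights(model, ckpt_state):
    """Copy checkpoint tensors whose name and shape match the model."""
    model_state = model.state_dict()
    kept = {k: v for k, v in ckpt_state.items()
            if k in model_state and v.shape == model_state[k].shape}
    # strict=False skips the mismatched (e.g. class-prediction) tensors.
    model.load_state_dict(kept, strict=False)
    return kept

# "COCO" model with an 80-class output vs. a custom 3-class model.
pretrained = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 80))
custom = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 3))

kept = load_matching_weights(custom, pretrained.state_dict())
# The shared first layer transfers; the 80-class output tensors are skipped.
```

With the shared weights carried over, training then proceeds with the stock schedule, as suggested above.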

FateScript commented 1 year ago

@aeozyalcin This tuning playbook from google-research might also help you.