Open aeozyalcin opened 1 year ago
@aeozyalcin Based on our experiments, there is no need to freeze any layers during fine-tuning. The current hyper-parameters (used for training from scratch on COCO) also work fine when fine-tuning on custom datasets.
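In case it helps, a minimal custom exp file for fine-tuning from a COCO checkpoint might look like the sketch below. The dataset paths, annotation file names, class count, and the tiny depth/width multipliers are placeholders/assumptions for illustration, not an official recipe:

```python
# custom_yolox_tiny_exp.py -- hypothetical exp file for fine-tuning YOLOX-tiny
# on a small custom COCO-format dataset. Paths and class count are placeholders.
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super().__init__()
        # YOLOX-tiny depth/width multipliers (assumed; check exps/default/yolox_tiny.py)
        self.depth = 0.33
        self.width = 0.375
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

        # Custom COCO-format dataset (placeholder paths / annotation names)
        self.data_dir = "datasets/my_deployment_env"
        self.train_ann = "instances_train.json"
        self.val_ann = "instances_val.json"
        self.num_classes = 80  # keep 80 if you only add images for existing COCO classes

        # Shorter schedule than a from-scratch COCO run
        self.max_epoch = 50
        self.warmup_epochs = 1
        self.no_aug_epochs = 5
        self.basic_lr_per_img = 0.01 / 64.0  # default value; lower it if training is unstable
```

Training would then be launched with something like `python tools/train.py -f custom_yolox_tiny_exp.py -c yolox_tiny.pth -d 1 -b 16 --fp16`, where `-c` loads the COCO-pretrained checkpoint (flags as in the YOLOX README; adjust batch size and devices to your setup).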
@aeozyalcin This tuning playbook from google-research might help you.
Hello,
I would like to enhance the standard COCO-trained YOLOX_tiny model with some additional images from the environment I intend to deploy it in, to improve accuracy and reduce false positives. I have seen tutorials on transfer learning/fine-tuning for other YOLO variants such as YOLOv5, where the backbone is frozen, training is run, and then the backbone is unfrozen and training continues with a smaller learning rate (fine-tuning).
I see that the ability to freeze layers is present in YOLOX, but I haven't been able to find a write-up/tutorial on which layers to freeze, what learning parameters to use, etc. Any help would be appreciated!
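For what it's worth, the freeze-then-unfreeze recipe from the YOLOv5 tutorials can be reproduced in plain PyTorch by toggling `requires_grad` on the backbone parameters. The sketch below assumes the detector exposes a `backbone` attribute (as the standard YOLOX model does) and is only meant to illustrate the idea, not an official YOLOX API:

```python
# Sketch: freeze/unfreeze the backbone of a YOLOX model (plain PyTorch, not an
# official YOLOX utility). Assumes `model.backbone` exists, as in the standard
# YOLOX detector definition.
import torch


def set_backbone_trainable(model: torch.nn.Module, trainable: bool) -> None:
    """Enable or disable gradients for every parameter in the backbone."""
    for param in model.backbone.parameters():
        param.requires_grad = trainable


def trainable_parameters(model: torch.nn.Module):
    """Hand only the trainable parameters to the optimizer."""
    return (p for p in model.parameters() if p.requires_grad)


# Phase 1: train the head with the backbone frozen.
# set_backbone_trainable(model, False)
# optimizer = torch.optim.SGD(trainable_parameters(model), lr=1e-3, momentum=0.9)

# Phase 2: unfreeze and continue with a smaller learning rate.
# set_backbone_trainable(model, True)
# optimizer = torch.optim.SGD(trainable_parameters(model), lr=1e-4, momentum=0.9)
```

That said, per the reply above, the maintainers report that freezing is not needed and that the stock hyper-parameters work for fine-tuning custom datasets.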