DAMO-YOLO: a fast and accurate object detection method with some new techs, including NAS backbones, efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
Apache License 2.0
How to freeze the backbone when finetuning on a custom dataset #126
[X] I have read the README carefully.
[X] I want to train on my custom dataset; I have read the tutorials for finetuning on custom data carefully and organized my dataset correctly.
[X] I have pulled the latest code from the main branch and run again, but the problem still exists.
Search before asking
[X] I have searched the DAMO-YOLO issues and found no similar questions.
Question
Hi, I would like to know if there is an option to freeze the backbone when finetuning on a custom dataset. According to the tutorial, the only required change is adding "self.train.finetune_path", which points to the pretrained .pth weights. However, if I run the train.py file without freezing the backbone, isn't that equivalent to training from scratch?
Thank you.
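For context, freezing a backbone in PyTorch just means disabling gradients on its parameters before training. Below is a minimal sketch, not DAMO-YOLO's actual API: `TinyDetector` and its `backbone`/`head` attributes are hypothetical stand-ins, and the real model's attribute names may differ.

```python
import torch.nn as nn

# Hypothetical minimal detector standing in for a DAMO-YOLO model;
# the real model's backbone attribute name may differ.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, 4, 1)

    def forward(self, x):
        return self.head(self.backbone(x))

def freeze_backbone(model: nn.Module) -> None:
    # Disable gradients for backbone parameters so the optimizer
    # never updates them; head parameters stay trainable.
    for p in model.backbone.parameters():
        p.requires_grad = False

model = TinyDetector()
freeze_backbone(model)

# After freezing, only head parameters should appear as trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

Note that this is different from finetuning with `finetune_path` alone: loading pretrained weights changes the starting point, while freezing additionally prevents those weights from being updated during training.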
Additional
No response