Before Asking
[X] I have read the README carefully.
[X] I want to train my custom dataset, and I have read the tutorials for training your custom data carefully and organized my dataset correctly. (FYI: We recommend applying the xx_finetune.py config files.)
[X] I have pulled the latest code of the main branch and run it again, but the problem still exists.
Search before asking
[X] I have searched the YOLOv6 issues and found no similar questions.
Question
Hello, I would like to combine transfer learning and self-distillation training to improve model performance. Specifically, I plan to first pre-train the model on a sufficiently large dataset A, then fine-tune it on a smaller dataset B via transfer learning, and apply self-distillation training during that fine-tuning stage. From the [tutorials for training your custom data](https://github.com/meituan/YOLOv6/blob/main/docs/Train_custom_data.md), I understand that self-distillation training is necessary during fine-tuning on dataset B, but I am not sure whether it is also necessary during pre-training on dataset A, since self-distillation at the pre-training stage would consume a lot of time. I would appreciate any suggestions based on your experience with these experiments.
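For reference, my planned workflow would look roughly like the sketch below. The dataset yaml files, config names, batch sizes, epoch counts, and run names are placeholders I made up for illustration; the `--distill` and `--teacher_model_path` options are the ones shown in the README's self-distillation example.

```shell
# Stage 1: pre-train on the large dataset A
# (this is the stage I am asking about: is a self-distillation pass needed here?)
python tools/train.py \
        --batch 64 \
        --conf configs/yolov6s.py \
        --data data/datasetA.yaml \
        --epochs 300 \
        --device 0 \
        --name pretrain_datasetA

# Stage 2a: fine-tune on the smaller dataset B, starting from the Stage 1 weights
# (the `pretrained` field in configs/yolov6s_finetune.py would point to
#  runs/train/pretrain_datasetA/weights/best_ckpt.pt)
python tools/train.py \
        --batch 32 \
        --conf configs/yolov6s_finetune.py \
        --data data/datasetB.yaml \
        --epochs 100 \
        --device 0 \
        --name finetune_datasetB

# Stage 2b: self-distillation on dataset B, using the fine-tuned model as teacher
python tools/train.py \
        --batch 32 \
        --conf configs/yolov6s_finetune.py \
        --data data/datasetB.yaml \
        --epochs 100 \
        --device 0 \
        --distill \
        --teacher_model_path runs/train/finetune_datasetB/weights/best_ckpt.pt \
        --name distill_datasetB
```

My question is essentially whether Stage 1 should also include a `--distill` run, or whether plain pre-training is sufficient before moving on to the fine-tuning and self-distillation steps on dataset B.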
Additional
No response