SpursLipu / YOLOv3v4-ModelCompression-MultidatasetTraining-Multibackbone


Feature mismatch when training with knowledge distillation strategy 4 #79

Open shaolingongfuhao opened 3 years ago

shaolingongfuhao commented 3 years ago

How can I make feature_s and feature_t the same length?

SpursLipu commented 3 years ago

Any distillation method that adds a feature loss requires the teacher and student to have the same number of feature layers; typically you use the original model to distill a channel-pruned model.
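For illustration, a minimal sketch of what a layer-matched feature loss looks like. `feature_loss` is a hypothetical helper, not the repo's exact API; it assumes `feature_s` and `feature_t` are lists of student/teacher feature maps with equal length and matching shapes.

```python
import torch.nn.functional as F

def feature_loss(feature_s, feature_t):
    # Hypothetical helper: pair the feature maps layer by layer, which is why
    # the teacher and student must expose the same number of feature layers.
    assert len(feature_s) == len(feature_t), "teacher/student layer counts must match"
    # Detach the teacher features so no gradients flow back into the teacher.
    return sum(F.mse_loss(fs, ft.detach()) for fs, ft in zip(feature_s, feature_t))
```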

shaolingongfuhao commented 3 years ago

Thanks for the reply. My teacher model is YOLOv3, and the student model has been channel- and layer-pruned. Could I truncate feature_t to its latter portion, keeping the same number of layers as feature_s, and compute the loss between those features and feature_s? Is that approach reasonable? Also, which layers in your code output feature_out? There is a part I don't quite understand.

SpursLipu commented 3 years ago

In theory that should work; you can try modifying the code and see.
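A rough sketch of the modification discussed above (hypothetical, not the repo's code): it assumes the student's surviving layers correspond to the tail of the teacher's feature list, and that each remaining pair of feature maps still has the same shape.

```python
import torch.nn.functional as F

def truncated_feature_loss(feature_s, feature_t):
    # Hypothetical variant: drop the teacher's earlier feature maps so that
    # only the last len(feature_s) layers are paired with the student.
    if len(feature_t) > len(feature_s):
        feature_t = feature_t[-len(feature_s):]
    # This only works if each remaining (fs, ft) pair has matching shapes.
    return sum(F.mse_loss(fs, ft.detach()) for fs, ft in zip(feature_s, feature_t))
```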

shaolingongfuhao commented 3 years ago

I made that change in the compute_lost_KD4 code, and this error appeared. Why is that?

shaolingongfuhao commented 3 years ago

Also, when reading the data from TensorBoard just now, I noticed that the TensorBoard logs seem to be missing a few points. Both fine-tuning and distillation ran for 50 epochs, but the records are each missing 1-3 points; the plots above are missing 2 and 3 points respectively. What could cause this?
