Hi @zhangcj13
Did you evaluate with `python eval.py`?
I have retrained the no-ext version, but the results have dropped further to:
Easy: 0.8805231726947155
Medium: 0.853272148846525
Hard: 0.6632535774650703
(I have modified the batch size from 18 to 8.) @zhangcj13 By the way, I am also training a mobilenetv2 version, but with the original UCBA upsampling. May I know whether you are using DeconvBN or DeCBA? I will let you know my results after my training is completed.
@ckcraig01 I trained with "DBFace(has_landmark=True, wide=64, has_ext=True, upmode="DeconvBN")" and replaced DBFace's Mbv3SmallFast backbone with mobilenetv2, batch size 8, height x width 800x800. The eval results were Easy: 0.8, Medium: 0.78, Hard: 0.66, but the model size is larger than the small model, so I was confused; maybe there are some training tricks, or something is wrong with my edit.
the mobilenetv2-DBFace model is like below
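For readers without the attached netron graph: a rough sketch of what this kind of backbone swap could look like, assuming torchvision's MobileNetV2 and a four-scale neck. The class name and stage-split indices are illustrative assumptions, not the exact edit used here.

```python
import torch.nn as nn
import torchvision


class MobileNetV2Backbone(nn.Module):
    """Hypothetical drop-in replacement for Mbv3SmallFast using MobileNetV2."""

    def __init__(self):
        super().__init__()
        features = torchvision.models.mobilenet_v2(pretrained=True).features
        # Split the feature extractor into stages; indices are approximate
        # and must match whatever strides DBFace's neck/up-blocks expect.
        self.stage1 = features[:4]     # stride 4,  24 channels
        self.stage2 = features[4:7]    # stride 8,  32 channels
        self.stage3 = features[7:14]   # stride 16, 96 channels
        self.stage4 = features[14:18]  # stride 32, 320 channels

    def forward(self, x):
        s4 = self.stage1(x)
        s8 = self.stage2(s4)
        s16 = self.stage3(s8)
        s32 = self.stage4(s16)
        return s4, s8, s16, s32
```

The channel counts of the returned maps then have to be wired into the UCBA/DeconvBN up-modules in place of the MobileNetV3-Small channel counts.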
@zhangcj13 Thank you for the detailed information. I will let you know the results after my training completes and share with you if I find anything new. (Btw, the model size for me has also increased to about 8 MB, because the author further reduced the mbv3 size to mbv3-small, and SE does help a lot.)
Hi @zhangcj13, I am back.
Here are the results for mbv2 (please note I still use UCBA): Easy: 0.9233590041918498, Medium: 0.9069792848208638, Hard: 0.7742320493061637
I use the model from https://github.com/rwightman/pytorch-image-models (April 5, 2020: 3.5M-param MobileNet-V2 100 @ 73%). I changed the mean to 0.5 and the std to 1.0, and reordered the input from BGR to RGB for training.
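A minimal sketch of that setup, assuming the `timm` package and the `mobilenetv2_100` checkpoint, with the preprocessing applied to inputs scaled to [0, 1] (the exact scaling convention in the DBFace training code may differ):

```python
import timm
import torch

# Pretrained 3.5M-param MobileNet-V2 100 from rwightman/pytorch-image-models;
# features_only=True makes it return multi-scale feature maps instead of logits.
backbone = timm.create_model('mobilenetv2_100', pretrained=True, features_only=True)


def preprocess(bgr, mean=0.5, std=1.0):
    # bgr: float tensor (N, 3, H, W) in [0, 1]; reorder channels BGR -> RGB,
    # then normalize with mean 0.5 and std 1.0 as described above.
    rgb = bgr[:, [2, 1, 0], :, :]
    return (rgb - mean) / std


feats = backbone(preprocess(torch.rand(1, 3, 800, 800)))
for f in feats:
    print(f.shape)  # feature maps at strides 2/4/8/16/32
```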
Hi @ckcraig01 Thank you for your training information and results. Hmm... I am confused that the 'upmode' would affect the accuracy so much. I will retrain using UCBA and DeCBA, and share with you once I reach a conclusion.
No problem @zhangcj13. From the author, I remember he said in the Chinese description that the upsampling does have some impact. Other possible reasons could be (1) the choice of output feature layers [I did not compare with your netron graph carefully] and (2) the initial pre-trained weights.
Hi @ckcraig01, me again.
The mbnv2-DBFace-DeCBA results with 120 epochs:
Easy: 0.8675595111351598
Medium: 0.8623034808222649
Hard: 0.731627571615204
The mbnv2-DBFace-UCBA results with only 50 epochs: Easy: 0.9123327693162817, Medium: 0.8925632479833138, Hard: 0.7422191428644876
I guess the bilinear-upsampling layer performs far better than the deconv layer for DBFace training. But OpenCV does not support the upsampling layer very well when I transform the model to ONNX. Do you have any ideas?
Hi @zhangcj13 👍
Not sure if this would help. I found in https://github.com/pytorch/pytorch/blob/68f23d566a69693aa527aab526d8bba1c1bafc66/torch/nn/modules/upsampling.py#L187 that it actually uses the upsampling layer: https://github.com/pytorch/pytorch/blob/68f23d566a69693aa527aab526d8bba1c1bafc66/torch/nn/modules/upsampling.py#L230
So I replaced UpsamplingBilinear2d with Upsample.
Origin:
self.up = nn.UpsamplingBilinear2d(scale_factor=2)
Modified:
self.up = nn.Upsample(size=None, scale_factor=2, mode='bilinear', align_corners=True)
Could you try it to see if this helps?
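A quick sanity check (a minimal sketch on a random feature map) that the two modules really behave the same:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 24, 50, 50)
old = nn.UpsamplingBilinear2d(scale_factor=2)
new = nn.Upsample(size=None, scale_factor=2, mode='bilinear', align_corners=True)

# UpsamplingBilinear2d is just the deprecated alias for this Upsample
# configuration, so the outputs should match exactly.
assert torch.allclose(old(x), new(x))
```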
@ckcraig01
Thank you for your advice!
I found that OpenCV has a resize layer that could replace UpsamplingBilinear. Now I am trying to modify the onnx_importer.cpp file to support importing the bilinear upsampling layer.
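For reference, a minimal sketch of exporting a bilinear-upsample block with `opset_version=11`, where PyTorch emits it as an ONNX Resize op. The tiny module below is only a stand-in for the UCBA up-block, not DBFace itself, and whether a given OpenCV build imports the resulting Resize attributes still needs to be verified.

```python
import torch
import torch.nn as nn

# Stand-in for a UCBA-style up-block: bilinear upsample + conv + BN + activation.
up_block = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
)
up_block.eval()

# With opset 11 the bilinear Upsample is exported as a Resize op
# (coordinate_transformation_mode='align_corners').
torch.onnx.export(up_block, torch.zeros(1, 32, 25, 25), "ucba_block.onnx",
                  opset_version=11)
```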
Hello, I replaced the small model's Mbv3SmallFast with mobilenetv2 and set the upsampling layers to deconvolution. After 150 epochs, evaluating with the provided tool, Easy and Medium are only around 80% and Hard only about 70%. Is this kind of model modification reasonable, or are there training tricks to improve the accuracy?
Is there anything to pay attention to when changing the backbone? I wrote a mobilenetv2; training looks normal and the generated intermediate results look quite good, but the prediction results are all wrong. Could you share your code? @zhangcj13