Both TUM and SFAM are a little complex; it is not obvious how this architecture could be so fast. Both the hourglass-style TUM and the fully connected attention are not speed-friendly. Waiting for the code.
TUM is a Thinned U-shape Module; it is not as slow as you think. A single TUM has even fewer parameters than one 1024x3x3x1024 conv layer. SFAM is also not that complex: first concatenation, then SE attention applied to the multi-scale features. There are only 6 SE attention modules in total. For comparison, SE-ResNet101 adds far more SE modules on top of ResNet101.
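To put the cost in perspective: one 1024x3x3x1024 conv layer has 1024·3·3·1024 ≈ 9.4M weights, while a single SE block over a 1024-channel feature with reduction 16 costs only about 2·1024²/16 ≈ 131K, so 6 of them add very little. Below is a minimal PyTorch sketch of one such SE attention module; the channel count (assumed here as 8 TUMs × 128 channels = 1024) and the reduction ratio of 16 are assumptions for illustration and may differ from the released code.

```python
import torch
import torch.nn as nn

class SFAMAttention(nn.Module):
    """Sketch of channel-wise SE attention over one scale of the
    concatenated multi-level features (one such module per pyramid scale).
    Channel count and reduction ratio are assumed, not taken from the repo."""
    def __init__(self, channels=1024, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pool
        self.fc = nn.Sequential(                       # excitation: two small FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (N, C, H, W)
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                                   # reweight channels, shape unchanged

att = SFAMAttention()
feat = torch.randn(2, 1024, 40, 40)  # one pyramid scale (spatial size arbitrary here)
out = att(feat)                      # same shape as feat
```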
I have tried it in darknet, and as the network gets deeper, it becomes slower.
@leo-XUKANG Very interesting experiment. I am also curious how M2Det achieves such fast speed. Would you share your darknet implementation of M2Det?
Hello, thanks for your paper. In the paper you use 8 TUMs; I want to ask whether this causes low FPS, since there are so many convolution layers. Thanks!