hustvl / YOLOS

[NeurIPS 2021] You Only Look at One Sequence
https://arxiv.org/abs/2106.00666
MIT License

Can you explain why YOLOS-Small has 30 million parameters while DeiT-S has 22 million parameters? #3

Closed · gaopengcuhk closed 3 years ago

gaopengcuhk commented 3 years ago

As the title suggests.

Yuxin-CV commented 3 years ago

Hi @gaopengcuhk, thanks for your interest in our work and good question!

For the small- and base-sized models, the added parameters mainly come from positional embeddings (PE): to align with the DETR setting, we initially add randomly initialized (512 / 16) x (864 / 16) PEs at every Transformer layer. But we later found that only interpolating the pre-trained first-layer PE to a larger size, i.e., (800 / 16) x (1344 / 16), without adding other PEs at intermediate layers, strikes a better accuracy & parameter trade-off: 36.6 AP vs. 36.1 AP, and 24.6 M (22.1 M + 2.5 M 😄) vs. 30.7 M (22.1 M + 8.6 M 😭). The tiny-sized model adopts this configuration.
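
For concreteness, here is a back-of-the-envelope check of where most of those extra parameters come from, assuming DeiT-S's 384-dim embeddings and 12 Transformer layers (an editorial sketch, not the repo's code; the small remainder presumably comes from the [DET]-token embeddings and related terms):

```python
# Rough parameter counts for the two PE schemes described above.
dim, depth = 384, 12  # DeiT-S embedding dim and number of Transformer layers

# Scheme 1: randomly initialized PE at every layer on a (512/16) x (864/16) grid.
per_layer_positions = (512 // 16) * (864 // 16)     # 32 * 54 = 1728
every_layer_pe = depth * per_layer_positions * dim  # ~8.0 M of the quoted +8.6 M

# Scheme 2: pre-trained first-layer PE interpolated to a (800/16) x (1344/16) grid.
first_layer_positions = (800 // 16) * (1344 // 16)  # 50 * 84 = 4200
first_layer_pe = first_layer_positions * dim        # ~1.6 M of the quoted +2.5 M

print(f"every layer: {every_layer_pe / 1e6:.1f} M, first layer only: {first_layer_pe / 1e6:.1f} M")
```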

We have added a detailed description in the Appendix and will submit it to arXiv soon (next week, hopefully). The pre-trained models will also be released soon, please stay tuned :)

This issue won't be closed until we update our manuscript on arXiv.

gaopengcuhk commented 3 years ago

Another question: why do you only add the prediction head on the last layer? Have you tried adding prediction heads to the last several layers, like DETR?

Yuxin-CV commented 3 years ago

> Another question: why do you only add the prediction head on the last layer? Have you tried adding prediction heads to the last several layers, like DETR?

Thanks for your valuable question. We tried this configuration in our early study, and it gave no improvement.

Our guess at the reason: for DETR, deep supervision works because the supervision is "deep enough", i.e., the decoders are stacked on top of at least a 50- / 101-layer ResNet backbone and 6 Transformer encoder layers, while YOLOS, with a much shallower network, cannot benefit from deep supervision.
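
For reference, a minimal sketch of what such a DETR-style deep-supervision setup could look like on a ViT encoder: a shared prediction head applied to the outputs of the last few blocks, each receiving the set-prediction loss. Module names and sizes here are illustrative assumptions, not the actual YOLOS implementation:

```python
import torch
import torch.nn as nn

class ViTWithAuxHeads(nn.Module):
    """Apply a shared prediction head to the last `num_aux` blocks so each
    of those layers can be supervised (DETR-style auxiliary losses)."""

    def __init__(self, depth=12, dim=384, num_aux=3, num_outputs=96):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, num_outputs)  # shared class + box head
        self.num_aux = num_aux

    def forward(self, x):
        outputs = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i >= len(self.blocks) - self.num_aux:  # last few layers only
                outputs.append(self.head(x))
        return outputs  # total loss = sum of set-prediction losses over outputs

tokens = torch.randn(2, 100, 384)  # e.g. a batch of 100 [DET] tokens
outs = ViTWithAuxHeads()(tokens)
print(len(outs), outs[0].shape)    # 3 torch.Size([2, 100, 96])
```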

gaopengcuhk commented 3 years ago

Another question: it seems you add the position embedding to x at every layer, while in DeiT only the first layer adds the position embedding. Is this important in YOLOS?

Yuxin-CV commented 3 years ago

> Another question: it seems you add the position embedding to x at every layer, while in DeiT only the first layer adds the position embedding. Is this important in YOLOS?

We actually answered this above (https://github.com/hustvl/YOLOS/issues/3#issuecomment-861146608): YOLOS with the PE added only at the first layer is better in terms of both AP and parameter efficiency :)
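
For context, this first-layer-only PE is usually obtained by reshaping the pre-trained patch PE to its 2-D grid and resizing it bicubically. A minimal sketch, assuming a (1, H*W, dim) patch PE with any special-token PEs handled separately (the function name and grid sizes are illustrative, not the exact YOLOS code):

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, old_hw, new_hw):
    """Resize a pre-trained 2-D patch positional embedding to a new grid.

    pos_embed: (1, old_h * old_w, dim) tensor of patch PEs.
    """
    (old_h, old_w), (new_h, new_w) = old_hw, new_hw
    dim = pos_embed.shape[-1]
    # (1, N, C) -> (1, C, H, W) so spatial interpolation can be applied
    pe = pos_embed.reshape(1, old_h, old_w, dim).permute(0, 3, 1, 2)
    pe = F.interpolate(pe, size=(new_h, new_w), mode="bicubic", align_corners=False)
    # back to (1, new_h * new_w, C)
    return pe.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)

# DeiT-S is pre-trained at 224x224 with 16x16 patches -> a 14x14 grid;
# fine-tuning at e.g. 800x1344 needs a 50x84 grid.
pe = torch.randn(1, 14 * 14, 384)
print(interpolate_pos_embed(pe, (14, 14), (50, 84)).shape)  # (1, 4200, 384)
```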

gaopengcuhk commented 3 years ago

Thank you very much for your reply.

Yuxin-CV commented 3 years ago

This issue won't be closed until we update our manuscript on arXiv.

Yuxin-CV commented 3 years ago

> This issue won't be closed until we update our manuscript on arXiv.

We have updated our manuscript on arXiv, so I'm closing this issue. Let us know if you have further questions.