yshMars / DistilPose

Implementation for: DistilPose: Tokenized Pose Regression with Heatmap Distillation (CVPR2023)

GFLOPs #4

Open amazing-cc opened 1 year ago

amazing-cc commented 1 year ago

I implemented DistilPose via MMPose 1.x and found that the GFLOPs don't match what the paper reported.

Input shape: (1, 3, 256, 192) Flops: 2.724G Params: 5.413M

2023/07/04 18:03:50 - mmengine - INFO - arch table

+---------------------------+----------------------+-----------+--------------+
| module                    | #parameters or shape | #flops    | #activations |
+---------------------------+----------------------+-----------+--------------+
| model                     | 5.413M               | 2.724G    | 19.353M      |
| backbone                  | 0.325M               | 1.016G    | 6.488M       |
| backbone.conv1            | 1.728K               | 21.234M   | 0.786M       |
| backbone.conv1.weight     | (64, 3, 3, 3)        |           |              |
| backbone.bn1              | 0.128K               | 1.573M    | 0            |
| backbone.bn1.weight       | (64,)                |           |              |
| backbone.bn1.bias         | (64,)                |           |              |
| backbone.conv2            | 36.864K              | 0.113G    | 0.197M       |
| backbone.conv2.weight     | (64, 64, 3, 3)       |           |              |
| backbone.bn2              | 0.128K               | 0.393M    | 0            |
| backbone.bn2.weight       | (64,)                |           |              |
| backbone.bn2.bias         | (64,)                |           |              |
| backbone.layer1           | 0.286M               | 0.879G    | 5.505M       |
| backbone.layer1.0         | 75.008K              | 0.23G     | 1.966M       |
| backbone.layer1.1         | 70.4K                | 0.216G    | 1.18M        |
| backbone.layer1.2         | 70.4K                | 0.216G    | 1.18M        |
| backbone.layer1.3         | 70.4K                | 0.216G    | 1.18M        |
| head.tokenhead            | 5.088M               | 1.708G    | 12.865M      |
| head.tokenhead.keypoin…   | (1, 17, 192)         |           |              |
| head.tokenhead.pos_emb…   | (1, 256, 192)        |           |              |
| head.tokenhead.patcht…    | 0.59M                | 0.151G    | 49.152K      |
| head.tokenhead.patch…     | (192, 3072)          |           |              |
| head.tokenhead.patch_…    | (192,)               |           |              |
| head.tokenhead.transfo…   | 4.444M               | 1.557G    | 12.816M      |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.transf…    | 0.37M                | 0.13G     | 1.068M       |
| head.tokenhead.mlp_head   | 1.349K               | 32.64K    | 85           |
| head.tokenhead.mlp_he…    | 0.384K               | 16.32K    | 0            |
| head.tokenhead.mlp_he…    | 0.965K               | 16.32K    | 85           |
+---------------------------+----------------------+-----------+--------------+

Can you tell me why this happens?
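For reference, the summary numbers and the arch table above come from mmengine's built-in complexity analysis. A minimal sketch of such a call is below; it assumes a standard MMPose 1.x environment with mmengine >= 0.6, and the config path, the use of `init_model`, and the result keys are illustrative assumptions rather than the exact setup used here:

```python
# Minimal sketch, assuming MMPose 1.x with mmengine >= 0.6.
# The config path below is a placeholder, not an actual file in this repo.
from mmengine.analysis import get_model_complexity_info
from mmpose.apis import init_model

model = init_model('configs/distilpose_s_coco-256x192.py',  # hypothetical path
                   device='cpu')
model.eval()

results = get_model_complexity_info(model,
                                    input_shape=(3, 256, 192),
                                    show_table=False,
                                    show_arch=True)
print('Flops: ', results['flops_str'])   # summary line like the one above
print('Params:', results['params_str'])
print(results['out_arch'])               # the per-module "arch table"
```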

yshMars commented 1 year ago

We provide a tool for testing GFLOPs in our code. It's in tools -> test_flops.sh. You can run the script and it will display detailed info about the GFLOPs. Then you can compare it with your reimplemented version. Sorry, I've been busy recently and can't look into this these days. If you find anything, please tell me in this issue or contact me via email. Thanks a lot.
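For context, a FLOPs script in an MMPose 0.x-based repo typically just builds the model from its config and calls mmcv's complexity counter, roughly like the sketch below; the config path is a placeholder and the actual tools/ script in this repo may differ, so treat it as an illustration:

```python
# Rough sketch of an MMPose 0.x-style FLOPs check (assumptions: mmcv < 2.0
# and MMPose 0.x installed; the config path is a placeholder).
from mmcv import Config
from mmcv.cnn import get_model_complexity_info
from mmpose.models import build_posenet

cfg = Config.fromfile('configs/distilpose_s_coco_256x192.py')  # placeholder
model = build_posenet(cfg.model)
if hasattr(model, 'forward_dummy'):
    model.forward = model.forward_dummy  # count only the plain forward pass
model.eval()

flops, params = get_model_complexity_info(model, (3, 256, 192),
                                          print_per_layer_stat=True)
print(f'Flops: {flops}\nParams: {params}')
```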

amazing-cc commented 1 year ago

Thanks for the prompt reply, I did what you said. It looks like the difference mainly comes from the transformer module, and I'm sure I haven't changed anything in this piece of code (TokenPose). [screenshot of the per-layer FLOPs comparison]

Can you give me some advice on what causes this difference? Could it be because of a version issue with PyTorch (torch 1.7.0 vs. torch >= 1.11.0)? Or is the old version of MMPose not counting accurately enough? Thanks again.

yshMars commented 1 year ago

I assume that the difference might be caused by the update of mmcv. If you check get_flops.py, you can find that the function get_model_complexity_info does all the GFLOPs computation work, and it comes from mmcv; it lives in mmcv/cnn/utils/flops_counter.py. You might have to check the history of get_model_complexity_info to find out what was changed.
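One way to check this hypothesis without rebuilding the whole model is to run both counters on an isolated transformer block and compare: the hook-based counter in mmcv 1.x only instruments registered module types, so ops issued as plain torch functions inside attention (matmul, softmax) can go uncounted, while the mmengine counter traces the actual operators. A rough sketch is below; the layer sizes are illustrative, not the exact DistilPose token-head config, and it assumes both mmcv < 2.0 and mmengine are importable in the same environment:

```python
# Rough comparison sketch (assumptions: mmcv < 2.0 and mmengine installed,
# torch >= 1.9 for batch_first; dimensions are illustrative only).
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=192, nhead=8, dim_feedforward=768,
                                   batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=12)
seq_len, dim = 273, 192   # e.g. 256 visual tokens + 17 keypoint tokens

# Old counter (mmcv 1.x): per-module hooks, so functional ops inside
# attention may not be counted.
from mmcv.cnn import get_model_complexity_info as mmcv_flops
flops, params = mmcv_flops(model, (seq_len, dim), print_per_layer_stat=False)
print('mmcv 1.x :', flops, params)

# Newer counter (mmengine): fvcore-style operator tracing, which counts the
# attention matmuls and therefore tends to report larger numbers.
from mmengine.analysis import get_model_complexity_info as mmengine_flops
out = mmengine_flops(model, input_shape=(seq_len, dim),
                     show_table=False, show_arch=False)
print('mmengine :', out['flops_str'], out['params_str'])
```

If the two printouts disagree mostly on the transformer layers, that would match the per-module gap in the table above.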