miltonmondal opened 2 years ago
fvcore reports 4.09G, but it also prints:

```
Skipped operation aten::batch_norm 53 time(s)
Skipped operation aten::max_pool2d 1 time(s)
Skipped operation aten::add_ 16 time(s)
Skipped operation aten::adaptive_avg_pool2d 1 time(s)
```
Perhaps those papers ignore the computation of some of these operators.
@jkhu29 you're right! ptflops also counts batch norm and pooling layers as non-zero ops, which is why it reports slightly higher numbers than expected.
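To see where such a gap can come from, here is a minimal sketch of how the "skipped" elementwise ops could be counted by hand. The helper names and the feature-map shapes below are illustrative placeholders chosen for this example, not taken from the actual network or from either library's internals:

```python
# Hypothetical sketch: estimate FLOPs of ops that some counters
# (e.g. fvcore) skip but others (e.g. ptflops) include.
# The shapes used below are illustrative, not the real model's.

def batchnorm_flops(c, h, w):
    # At inference, batch norm folds to one multiply and one add
    # per activation: 2 FLOPs per element of the (c, h, w) tensor.
    return 2 * c * h * w

def maxpool_flops(c, h_out, w_out, k):
    # Each k*k window needs (k*k - 1) comparisons per output element.
    return (k * k - 1) * c * h_out * w_out

# Illustrative feature-map shapes (channels, height, width):
stages = [(64, 112, 112), (256, 56, 56), (512, 28, 28)]
bn_total = sum(batchnorm_flops(c, h, w) for c, h, w in stages)
pool_total = maxpool_flops(64, 56, 56, 3)
print(bn_total, pool_total)
```

Summing contributions like these over every batch norm, pooling, and residual-add layer is how a tool that includes elementwise ops ends up a few hundredths of a GFLOP above one that counts only convolutions and linear layers.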
I am getting 4.12B FLOPs using your code, whereas almost all research papers report 4.09B FLOPs for this configuration
(the pretrained model with PyTorch's default 76.15% test accuracy).
Could you please modify the code, or explain the reason for the 0.03B increase in FLOPs?