uzh-rpg / svit

Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation"
Apache License 2.0

FLOPs is not reduced #5

Closed King4819 closed 3 months ago

King4819 commented 4 months ago

I want to ask: when I calculate the FLOPs of svit-s with the classification code, the result is still 4.53 GFLOPs, so it seems the pruning does not reduce FLOPs.

kaikai23 commented 4 months ago

Hello, which FLOP-counting tool do you use? In some tools, certain operations may be over-, under-, or mis-counted, so the reported results can be imprecise.

It is more reliable to calculate the FLOPs by hand, using the average number of tokens kept in each block.

An example calculation for ViT-S (4.6 GMACs) looks like this:
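A minimal sketch of this hand calculation, assuming the standard ViT-S configuration (embedding dim 384, 12 blocks, MLP ratio 4, 197 tokens for a 224x224 input with patch size 16); the per_block_tokens list is a placeholder for the average token counts you measure in the pruned model:

# Hand estimate of ViT FLOPs, reported as MACs to match the 4.6 GMACs figure above.
# Assumes a plain ViT-S / DeiT-S layout; per_block_tokens holds the average number
# of tokens processed in each of the 12 blocks.
def vit_macs(per_block_tokens, dim=384, mlp_ratio=4, patch=16, img=224, in_ch=3):
    n_patches = (img // patch) ** 2
    macs = n_patches * in_ch * patch * patch * dim    # patch embedding
    for n in per_block_tokens:                        # n = avg tokens in this block
        macs += 3 * n * dim * dim                     # qkv projection
        macs += 2 * n * n * dim                       # attention scores + weighted sum
        macs += n * dim * dim                         # attention output projection
        macs += 2 * n * dim * (mlp_ratio * dim)       # two-layer MLP
    return macs

# Unpruned ViT-S: 197 tokens (196 patches + cls) in every block -> about 4.6 GMACs.
print(vit_macs([197] * 12) / 1e9)
# For the pruned model, replace 197 with the measured average token count per block.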

King4819 commented 4 months ago

@kaikai23 Thanks for your reply. I used the fvcore library to calculate the FLOPs of the pruned model, but the result is still the original 4.6 GFLOPs. I would like to use a FLOP-counting tool to verify the hand-calculated result.

kaikai23 commented 3 months ago

Hi, I tried the following code with fvcore:

import torch
from fvcore.nn import FlopCountAnalysis

model.visualize = False   # disable visualization hooks during the trace
model.statistics = False  # disable statistics collection during the trace
model = model.cuda()
model.eval()
input = input.cuda()  # a single image tensor of shape [1, 3, 224, 224]

with torch.no_grad():
    flops = FlopCountAnalysis(model, input)
    print(flops.total())  # total count reported by fvcore
Unsupported operator aten::add encountered 28 time(s)
Unsupported operator aten::_transformer_encoder_layer_fwd encountered 12 time(s)
Unsupported operator aten::gelu encountered 9 time(s)
Unsupported operator aten::log_softmax encountered 9 time(s)
Unsupported operator aten::empty_like encountered 9 time(s)
Unsupported operator aten::exponential_ encountered 9 time(s)
Unsupported operator aten::log encountered 9 time(s)
Unsupported operator aten::neg encountered 9 time(s)
Unsupported operator aten::div encountered 9 time(s)
Unsupported operator aten::softmax encountered 9 time(s)
Unsupported operator aten::scatter_ encountered 9 time(s)
Unsupported operator aten::sub encountered 9 time(s)
Unsupported operator aten::mul encountered 18 time(s)
Unsupported operator aten::rsub encountered 9 time(s)
The following submodules of the model were never called during the trace of the graph. They may be unused, or they were accessed by direct calls to .forward() or via other python methods. In the latter case they will have zeros for statistics, though their statistics will still contribute to their parent calling module.
blocks.0.TransformerEncoderLayer.dropout, blocks.0.TransformerEncoderLayer.dropout1, blocks.0.TransformerEncoderLayer.dropout2, blocks.0.TransformerEncoderLayer.linear1, blocks.0.TransformerEncoderLayer.linear2, blocks.0.TransformerEncoderLayer.norm1, blocks.0.TransformerEncoderLayer.norm2, blocks.0.TransformerEncoderLayer.self_attn, blocks.0.TransformerEncoderLayer.self_attn.out_proj, blocks.1.TransformerEncoderLayer.dropout, blocks.1.TransformerEncoderLayer.dropout1, blocks.1.TransformerEncoderLayer.dropout2, blocks.1.TransformerEncoderLayer.linear1, blocks.1.TransformerEncoderLayer.linear2, blocks.1.TransformerEncoderLayer.norm1, blocks.1.TransformerEncoderLayer.norm2, blocks.1.TransformerEncoderLayer.self_attn, blocks.1.TransformerEncoderLayer.self_attn.out_proj, blocks.10.TransformerEncoderLayer.dropout, blocks.10.TransformerEncoderLayer.dropout1, blocks.10.TransformerEncoderLayer.dropout2, blocks.10.TransformerEncoderLayer.linear1, blocks.10.TransformerEncoderLayer.linear2, blocks.10.TransformerEncoderLayer.norm1, blocks.10.TransformerEncoderLayer.norm2, blocks.10.TransformerEncoderLayer.self_attn, blocks.10.TransformerEncoderLayer.self_attn.out_proj, blocks.11.TransformerEncoderLayer.dropout, blocks.11.TransformerEncoderLayer.dropout1, blocks.11.TransformerEncoderLayer.dropout2, blocks.11.TransformerEncoderLayer.linear1, blocks.11.TransformerEncoderLayer.linear2, blocks.11.TransformerEncoderLayer.norm1, blocks.11.TransformerEncoderLayer.norm2, blocks.11.TransformerEncoderLayer.self_attn, blocks.11.TransformerEncoderLayer.self_attn.out_proj, blocks.2.TransformerEncoderLayer.dropout, blocks.2.TransformerEncoderLayer.dropout1, blocks.2.TransformerEncoderLayer.dropout2, blocks.2.TransformerEncoderLayer.linear1, blocks.2.TransformerEncoderLayer.linear2, blocks.2.TransformerEncoderLayer.norm1, blocks.2.TransformerEncoderLayer.norm2, blocks.2.TransformerEncoderLayer.self_attn, blocks.2.TransformerEncoderLayer.self_attn.out_proj, blocks.3.TransformerEncoderLayer.dropout, blocks.3.TransformerEncoderLayer.dropout1, blocks.3.TransformerEncoderLayer.dropout2, blocks.3.TransformerEncoderLayer.linear1, blocks.3.TransformerEncoderLayer.linear2, blocks.3.TransformerEncoderLayer.norm1, blocks.3.TransformerEncoderLayer.norm2, blocks.3.TransformerEncoderLayer.self_attn, blocks.3.TransformerEncoderLayer.self_attn.out_proj, blocks.4.TransformerEncoderLayer.dropout, blocks.4.TransformerEncoderLayer.dropout1, blocks.4.TransformerEncoderLayer.dropout2, blocks.4.TransformerEncoderLayer.linear1, blocks.4.TransformerEncoderLayer.linear2, blocks.4.TransformerEncoderLayer.norm1, blocks.4.TransformerEncoderLayer.norm2, blocks.4.TransformerEncoderLayer.self_attn, blocks.4.TransformerEncoderLayer.self_attn.out_proj, blocks.5.TransformerEncoderLayer.dropout, blocks.5.TransformerEncoderLayer.dropout1, blocks.5.TransformerEncoderLayer.dropout2, blocks.5.TransformerEncoderLayer.linear1, blocks.5.TransformerEncoderLayer.linear2, blocks.5.TransformerEncoderLayer.norm1, blocks.5.TransformerEncoderLayer.norm2, blocks.5.TransformerEncoderLayer.self_attn, blocks.5.TransformerEncoderLayer.self_attn.out_proj, blocks.6.TransformerEncoderLayer.dropout, blocks.6.TransformerEncoderLayer.dropout1, blocks.6.TransformerEncoderLayer.dropout2, blocks.6.TransformerEncoderLayer.linear1, blocks.6.TransformerEncoderLayer.linear2, blocks.6.TransformerEncoderLayer.norm1, blocks.6.TransformerEncoderLayer.norm2, blocks.6.TransformerEncoderLayer.self_attn, blocks.6.TransformerEncoderLayer.self_attn.out_proj, 
blocks.7.TransformerEncoderLayer.dropout, blocks.7.TransformerEncoderLayer.dropout1, blocks.7.TransformerEncoderLayer.dropout2, blocks.7.TransformerEncoderLayer.linear1, blocks.7.TransformerEncoderLayer.linear2, blocks.7.TransformerEncoderLayer.norm1, blocks.7.TransformerEncoderLayer.norm2, blocks.7.TransformerEncoderLayer.self_attn, blocks.7.TransformerEncoderLayer.self_attn.out_proj, blocks.8.TransformerEncoderLayer.dropout, blocks.8.TransformerEncoderLayer.dropout1, blocks.8.TransformerEncoderLayer.dropout2, blocks.8.TransformerEncoderLayer.linear1, blocks.8.TransformerEncoderLayer.linear2, blocks.8.TransformerEncoderLayer.norm1, blocks.8.TransformerEncoderLayer.norm2, blocks.8.TransformerEncoderLayer.self_attn, blocks.8.TransformerEncoderLayer.self_attn.out_proj, blocks.9.TransformerEncoderLayer.dropout, blocks.9.TransformerEncoderLayer.dropout1, blocks.9.TransformerEncoderLayer.dropout2, blocks.9.TransformerEncoderLayer.linear1, blocks.9.TransformerEncoderLayer.linear2, blocks.9.TransformerEncoderLayer.norm1, blocks.9.TransformerEncoderLayer.norm2, blocks.9.TransformerEncoderLayer.self_attn, blocks.9.TransformerEncoderLayer.self_attn.out_proj
127357632

The reported count is only about 0.127 GFLOPs. It seems fvcore misses many operations here; in particular, each block goes through the fused aten::_transformer_encoder_layer_fwd operator, which fvcore does not support, so essentially all of the attention and MLP compute is left uncounted.

Unfortunately, I am also not aware of a tool that supports counting FLOPs for these operations.
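One possible workaround (a sketch, not part of the official code) is to register custom handlers with fvcore's set_op_handle for the unsupported operators. The handler below is an approximation that assumes the first input of aten::_transformer_encoder_layer_fwd is the token tensor of shape [B, N, D] and that the MLP ratio is 4; it reuses the model and input variables from the snippet above.

import numpy as np
from fvcore.nn import FlopCountAnalysis
from fvcore.nn.jit_handles import get_shape

def transformer_layer_macs(inputs, outputs):
    # Approximate MACs for one fused encoder layer. Assumes inputs[0] is the
    # token tensor of shape [B, N, D] and an MLP expansion ratio of 4.
    b, n, d = get_shape(inputs[0])
    return b * (12 * n * d * d + 2 * n * n * d)

def elementwise_flop(inputs, outputs):
    # Rough estimate: one FLOP per output element for simple pointwise ops.
    return int(np.prod(get_shape(outputs[0])))

flops = FlopCountAnalysis(model, input)
flops.set_op_handle("aten::_transformer_encoder_layer_fwd", transformer_layer_macs)
flops.set_op_handle("aten::gelu", elementwise_flop)
print(flops.total())

Note that fvcore's built-in handlers effectively count multiply-adds, so returning MACs from the custom handler keeps the totals on the same scale; the numbers remain approximations rather than exact operator counts.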