open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Wrong FLOPs reported by /mmdet/tools/get_flops.py #1760

Closed M0reDr1nk closed 4 years ago

M0reDr1nk commented 4 years ago

When I use get_flops.py to measure the FLOPs of the FCOS detector, the results for the FPN and the FCOS head look wrong (negative MAC counts). Since the FLOPs computation seems to be model-independent, I don't think the problem is caused by the FCOS model itself.

I paste part of the FLOPs output from get_flops.py here:


> (bbox_head): FCOSHead(
>     -8.466 GMac, 97.667% MACs, 
>     (loss_cls): FocalLoss(0.0 GMac, -0.000% MACs, )
>     (loss_bbox): IoULoss(0.0 GMac, -0.000% MACs, )
>     (loss_centerness): CrossEntropyLoss(0.0 GMac, -0.000% MACs, )
>     (cls_convs): ModuleList(
>       -4.302 GMac, 49.631% MACs, 
>       (0): ConvModule(
>         -1.075 GMac, 12.408% MACs, 
>         (conv): Conv2d(-1.077 GMac, 12.424% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
>         (gn): GroupNorm(0.0 GMac, -0.000% MACs, 32, 256, eps=1e-05, affine=True)
>         (activate): ReLU(0.001 GMac, -0.016% MACs, inplace=True)
>       )
>       (1): ConvModule(
>         -1.075 GMac, 12.408% MACs, 
>         (conv): Conv2d(-1.077 GMac, 12.424% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
>         (gn): GroupNorm(0.0 GMac, -0.000% MACs, 32, 256, eps=1e-05, affine=True)
>         (activate): ReLU(0.001 GMac, -0.016% MACs, inplace=True)
>       )
>       (2): ConvModule(
>         -1.075 GMac, 12.408% MACs, 
>         (conv): Conv2d(-1.077 GMac, 12.424% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
>         (gn): GroupNorm(0.0 GMac, -0.000% MACs, 32, 256, eps=1e-05, affine=True)
>         (activate): ReLU(0.001 GMac, -0.016% MACs, inplace=True)
>       )
>       (3): ConvModule(
>         -1.075 GMac, 12.408% MACs, 
>         (conv): Conv2d(-1.077 GMac, 12.424% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
>         (gn): GroupNorm(0.0 GMac, -0.000% MACs, 32, 256, eps=1e-05, affine=True)
>         (activate): ReLU(0.001 GMac, -0.016% MACs, inplace=True)
>       )
>     )
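For reference, and independent of the exact input shape, the MAC count of a 3×3, 256→256 conv with bias=False is simply Cin·Cout·Kh·Kw per output position, so it can never be negative. A minimal check (the feature-map size below is only a placeholder):

```python
# Minimal check: MACs of a 3x3, 256->256 conv (bias=False) on an h x w
# feature map. The value is positive for any h, w, so the negative numbers
# in the printout above cannot be real counts.
h, w = 100, 168  # placeholder spatial size, not my actual input shape
macs = 256 * 256 * 3 * 3 * h * w
print(macs / 1e9, 'GMac')
```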

Environment:

hellock commented 4 years ago

Thanks for your report. Could you be more specific about what you mean by model-independent?

@yhcao6 Could you add support for the FLOPs computation of GN layers?
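A hook of roughly this shape might work (just a sketch, assuming the usual convention of counting normalization layers at about one or two MACs per element; the actual registration mechanism in our flops counter may differ):

```python
# Hypothetical GroupNorm MACs hook: count ~1 MAC per element for the
# normalization and ~1 more per element for the affine scale/shift.
import torch
import torch.nn as nn

def gn_flops_hook(module, inputs, output):
    numel = inputs[0].numel()
    macs = numel * (2 if module.affine else 1)
    module.__macs__ = getattr(module, '__macs__', 0) + macs

gn = nn.GroupNorm(32, 256)
gn.register_forward_hook(gn_flops_hook)
gn(torch.randn(1, 256, 80, 80))
print(gn.__macs__)  # 2 * 256 * 80 * 80
```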

M0reDr1nk commented 4 years ago

> Thanks for your report. Could you be more specific about what you mean by model-independent?
>
> @yhcao6 Could you add support for the FLOPs computation of GN layers?

Maybe I didn't express it clearly. What I meant is that as long as a model only uses supported layers and supported operations, we should always get the correct FLOPs result, no matter which model it is.

My feeling is that if only supported operations/layers are used, then no matter how they are combined into blocks or assembled into a model, the counter should in principle handle it.

When I first saw the result I was quite confused. Seeing GN reported as 0 MACs, I assumed it was skipped because, like BN, it could be folded into the preceding conv layer at inference time, but I don't understand why the conv layers themselves come out negative...
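For the folding I mentioned: a minimal sketch of absorbing a BatchNorm into the preceding conv at inference time (assuming a plain Conv2d followed by BatchNorm2d in eval mode), which is why a counter may reasonably report BN as nearly free. GN cannot be folded this way, since its statistics depend on the input:

```python
# Minimal sketch: fold a BatchNorm2d into the preceding Conv2d so that
# inference needs only one conv.
#   y = gamma * (conv(x) + b - mean) / std + beta
#   => W' = W * gamma / std,  b' = (b - mean) * gamma / std + beta
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```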

yhcao6 commented 4 years ago

@M0reDr1nk Sorry, we have no Windows 10 environment to test on, but I tested FCOS on Ubuntu and the results seem correct. Here are my testing command and result:

python tools/get_flops.py configs/fcos/fcos_r50_caffe_fpn_gn_1x_4gpu.py --shape 640

FCOS(
  80.145 GMac, 100.000% MACs,
  (backbone): ResNet(
    31.73 GMac, 39.591% MACs,
    (conv1): Conv2d(0.963 GMac, 1.202% MACs, 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(0.013 GMac, 0.016% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(0.007 GMac, 0.008% MACs, inplace=True)
    (maxpool): MaxPool2d(0.007 GMac, 0.008% MACs, kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      5.554 GMac, 6.930% MACs,
      (0): Bottleneck(
        1.93 GMac, 2.408% MACs,
        (conv1): Conv2d(0.105 GMac, 0.131% MACs, 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.013 GMac, 0.016% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.01 GMac, 0.012% MACs, inplace=True)
        (downsample): Sequential(
          0.433 GMac, 0.540% MACs,
          (0): Conv2d(0.419 GMac, 0.523% MACs, 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(0.013 GMac, 0.016% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        1.812 GMac, 2.261% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.013 GMac, 0.016% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.01 GMac, 0.012% MACs, inplace=True)
      )
      (2): Bottleneck(
        1.812 GMac, 2.261% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.003 GMac, 0.004% MACs, 64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.013 GMac, 0.016% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.01 GMac, 0.012% MACs, inplace=True)
      )
    )
    (layer2): Sequential(
      7.825 GMac, 9.764% MACs,
      (0): Bottleneck(
        2.433 GMac, 3.036% MACs,
        (conv1): Conv2d(0.21 GMac, 0.262% MACs, 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (bn1): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.007 GMac, 0.008% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.005 GMac, 0.006% MACs, inplace=True)
        (downsample): Sequential(
          0.845 GMac, 1.055% MACs,
          (0): Conv2d(0.839 GMac, 1.047% MACs, 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(0.007 GMac, 0.008% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        1.797 GMac, 2.243% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.007 GMac, 0.008% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.005 GMac, 0.006% MACs, inplace=True)
      )
      (2): Bottleneck(
        1.797 GMac, 2.243% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.007 GMac, 0.008% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.005 GMac, 0.006% MACs, inplace=True)
      )
      (3): Bottleneck(
        1.797 GMac, 2.243% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.002 GMac, 0.002% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.007 GMac, 0.008% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.005 GMac, 0.006% MACs, inplace=True)
      )
    )
    (layer3): Sequential(
      11.372 GMac, 14.189% MACs,
      (0): Bottleneck(
        2.422 GMac, 3.022% MACs,
        (conv1): Conv2d(0.21 GMac, 0.262% MACs, 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
        (downsample): Sequential(
          0.842 GMac, 1.051% MACs,
          (0): Conv2d(0.839 GMac, 1.047% MACs, 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        1.79 GMac, 2.233% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (2): Bottleneck(
        1.79 GMac, 2.233% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (3): Bottleneck(
        1.79 GMac, 2.233% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (4): Bottleneck(
        1.79 GMac, 2.233% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (5): Bottleneck(
        1.79 GMac, 2.233% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.001 GMac, 0.001% MACs, 256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.003 GMac, 0.004% MACs, 1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
    )
    (layer4): Sequential(
      5.99 GMac, 7.473% MACs,
      (0): Bottleneck(
        2.417 GMac, 3.016% MACs,
        (conv1): Conv2d(0.21 GMac, 0.262% MACs, 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (bn1): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.002 GMac, 0.002% MACs, 2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.001 GMac, 0.002% MACs, inplace=True)
        (downsample): Sequential(
          0.84 GMac, 1.049% MACs,
          (0): Conv2d(0.839 GMac, 1.047% MACs, 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(0.002 GMac, 0.002% MACs, 2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        1.786 GMac, 2.229% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.002 GMac, 0.002% MACs, 2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.001 GMac, 0.002% MACs, inplace=True)
      )
      (2): Bottleneck(
        1.786 GMac, 2.229% MACs,
        (conv1): Conv2d(0.419 GMac, 0.523% MACs, 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(0.944 GMac, 1.178% MACs, 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(0.0 GMac, 0.001% MACs, 512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(0.419 GMac, 0.523% MACs, 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(0.002 GMac, 0.002% MACs, 2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(0.001 GMac, 0.002% MACs, inplace=True)
      )
    )
  )
  (neck): FPN(
    6.501 GMac, 8.111% MACs,
    (lateral_convs): ModuleList(
      1.47 GMac, 1.834% MACs,
      (0): ConvModule(
        0.84 GMac, 1.049% MACs,
        (conv): Conv2d(0.84 GMac, 1.049% MACs, 512, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): ConvModule(
        0.42 GMac, 0.524% MACs,
        (conv): Conv2d(0.42 GMac, 0.524% MACs, 1024, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): ConvModule(
        0.21 GMac, 0.262% MACs,
        (conv): Conv2d(0.21 GMac, 0.262% MACs, 2048, 256, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (fpn_convs): ModuleList(
      5.03 GMac, 6.277% MACs,
      (0): ConvModule(
        3.777 GMac, 4.712% MACs,
        (conv): Conv2d(3.777 GMac, 4.712% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (1): ConvModule(
        0.944 GMac, 1.178% MACs,
        (conv): Conv2d(0.944 GMac, 1.178% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (2): ConvModule(
        0.236 GMac, 0.295% MACs,
        (conv): Conv2d(0.236 GMac, 0.295% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (3): ConvModule(
        0.059 GMac, 0.074% MACs,
        (conv): Conv2d(0.059 GMac, 0.074% MACs, 256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      )
      (4): ConvModule(
        0.015 GMac, 0.018% MACs,
        (conv): Conv2d(0.015 GMac, 0.018% MACs, 256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      )
    )
  )
  (bbox_head): FCOSHead(
    41.914 GMac, 52.297% MACs,
    (loss_cls): FocalLoss(0.0 GMac, 0.000% MACs, )
    (loss_bbox): IoULoss(0.0 GMac, 0.000% MACs, )
    (loss_centerness): CrossEntropyLoss(0.0 GMac, 0.000% MACs, )
    (cls_convs): ModuleList(
      20.122 GMac, 25.107% MACs,
      (0): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (1): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (2): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (3): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
    )
    (reg_convs): ModuleList(
      20.122 GMac, 25.107% MACs,
      (0): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (1): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (2): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
      (3): ConvModule(
        5.03 GMac, 6.277% MACs,
        (conv): Conv2d(5.028 GMac, 6.274% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(0.0 GMac, 0.000% MACs, 32, 256, eps=1e-05, affine=True)
        (activate): ReLU(0.002 GMac, 0.003% MACs, inplace=True)
      )
    )
    (fcos_cls): Conv2d(1.572 GMac, 1.961% MACs, 256, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fcos_reg): Conv2d(0.079 GMac, 0.098% MACs, 256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fcos_centerness): Conv2d(0.02 GMac, 0.025% MACs, 256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (scales): ModuleList(
      0.0 GMac, 0.000% MACs,
      (0): Scale(0.0 GMac, 0.000% MACs, )
      (1): Scale(0.0 GMac, 0.000% MACs, )
      (2): Scale(0.0 GMac, 0.000% MACs, )
      (3): Scale(0.0 GMac, 0.000% MACs, )
      (4): Scale(0.0 GMac, 0.000% MACs, )
    )
  )
)
==============================
Input shape: (3, 640, 640)
Flops: 80.14 GMac
Params: 32.02 M
==============================
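As a sanity check, the 5.028 GMac reported for each 3×3 conv in the head can be reproduced by hand, assuming the five FPN levels have strides 8, 16, 32, 64 and 128 for the 640×640 input:

```python
# MACs of one 3x3, 256->256 conv (bias=False) applied on all five FPN levels.
kernel_ops = 256 * 256 * 3 * 3
positions = sum((640 // s) ** 2 for s in (8, 16, 32, 64, 128))  # 80^2 + ... + 5^2
print(kernel_ops * positions / 1e9)  # ~5.028 GMac, matching the printout
```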
hellock commented 4 years ago

#1850