jqsun98 closed this issue 2 years ago
Hi @cambridgeinch
We used this https://github.com/cszn/KAIR/blob/master/utils/utils_modelsummary.py
Here is a code snippet:
```python
import torch

from kair_utils.utils_modelsummary import get_model_activation, get_model_flops

# Restormer is the model definition from this repository
model = Restormer().cuda()

with torch.no_grad():
    input_dim = (3, 256, 256)  # set the input dimension

    activations, num_conv2d = get_model_activation(model, input_dim)
    print('{:>16s} : {:<.4f} [M]'.format('#Activations', activations / 10**6))
    print('{:>16s} : {:<d}'.format('#Conv2d', num_conv2d))

    flops = get_model_flops(model, input_dim, False)
    print('{:>16s} : {:<.4f} [G]'.format('FLOPs', flops / 10**9))

num_parameters = sum(p.numel() for p in model.parameters())
print('{:>16s} : {:<.4f} [M]'.format('#Params', num_parameters / 10**6))
```
Thanks so much for your reply. I still have a question about the results in the paper.
In Table 1 ("Image Deraining results"), quantitative results on five test sets are listed for different methods. Were all of these results obtained by re-training for 300K iterations on 8 GPUs, as mentioned in https://github.com/swz30/Restormer/blob/main/Deraining/README.md#training?
In Tables 7 and 8, the parameter counts and FLOPs of different network settings are listed for an input image of size 256×256. I'd like to know how the FLOPs were obtained. I have tried `thop` as well as the open-source package flops-counter.pytorch (https://github.com/sovrasov/flops-counter.pytorch), but neither works well: accurate FLOPs for the Transformer Block, MergeBlock, and other custom layers cannot be derived with `thop` directly.
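As a cross-check on whatever counter is used, FLOPs for standard layers can be derived by hand from the layer shapes. A minimal pure-Python sketch, counting multiply-accumulate operations (MACs, the quantity `thop` reports); the channel numbers in the example are hypothetical, not Restormer's actual configuration:

```python
def conv2d_macs(c_in, c_out, k, h_out, w_out, groups=1):
    """MACs for a k x k Conv2d producing a (c_out, h_out, w_out) output map."""
    return c_out * h_out * w_out * (k * k * c_in // groups)

def linear_macs(f_in, f_out, n_tokens=1):
    """MACs for a Linear layer applied independently to n_tokens vectors."""
    return n_tokens * f_in * f_out

# Hypothetical example: a 3x3 conv from 3 to 48 channels on a 256x256 map
macs = conv2d_macs(3, 48, 3, 256, 256)
print('{:.4f} [G MACs]'.format(macs / 10**9))  # 0.0849 G

# Depthwise variant (groups == channels), as used inside many attention blocks
dw_macs = conv2d_macs(48, 48, 3, 256, 256, groups=48)
```

Summing such per-layer counts over a model (including the attention and gating layers the generic hooks miss) gives a sanity bound to compare against the tool's output.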