wyf0912 / LLFlow

[AAAI 2022] The official code release of the paper "Low-Light Image Enhancement with Normalizing Flow"

How to calculate FLOPs and #Params for LLFlow? #23

Closed: ShenZheng2000 closed this issue 2 years ago

ShenZheng2000 commented 2 years ago

Hello, authors! I am using the following function to calculate FLOPs, #Params, and inference time, and it works for methods like ZeroDCE, RUAS, and URetinexNet.

from thop import profile
import torch
import time
def cal_eff_score(model, count = 100, use_cuda=True):

    # define input tensor
    inp_tensor = torch.rand(1, 3, 1080, 1920) 

    # deploy to cuda
    if use_cuda:
        inp_tensor = inp_tensor.cuda()
        model = model.cuda()

    # get flops and params
    flops, params = profile(model, inputs=(inp_tensor, ))
    G_flops = flops * 1e-9
    M_params = params * 1e-6

    # get time
    start_time = time.time()
    for i in range(count):
        _ = model(inp_tensor)
    used_time = time.time() - start_time
    ave_time = used_time / count

    # print score
    print('FLOPs (G) = {:.4f}'.format(G_flops))
    print('Params (M) = {:.4f}'.format(M_params))
    print('Time (S) = {:.4f}'.format(ave_time))
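
A side note on the timing loop above: CUDA kernels launch asynchronously, so calling time.time() around the loop can under-report GPU time. Below is a minimal sketch of a more careful replacement for the "# get time" block, assuming the same model, inp_tensor, and count as in cal_eff_score (the warm-up count of 10 is an arbitrary choice):

    # more reliable GPU timing: warm up first, then synchronize around the measured loop
    with torch.no_grad():
        for _ in range(10):               # warm-up iterations
            _ = model(inp_tensor)
        torch.cuda.synchronize()          # wait for pending kernels before starting the clock
        start_time = time.time()
        for i in range(count):
            _ = model(inp_tensor)
        torch.cuda.synchronize()          # wait for the last forward pass to finish
        ave_time = (time.time() - start_time) / count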

However, if I pass your model as the argument, it gives me the following error.

Traceback (most recent call last):
  File "test_unpaired.py", line 184, in <module>
    main()
  File "test_unpaired.py", line 135, in main
    cal_eff_score(model)
  File "test_unpaired.py", line 32, in cal_eff_score
    flops, params = profile(model, inputs=(inp_tensor, ))
  File "/root/miniconda3/lib/python3.8/site-packages/thop/profile.py", line 92, in profile
    model(*inputs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 141, in decorate_autocast
    return func(*args, **kwargs)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 97, in forward
    return self.normal_flow(gt, lr, epses=epses, lr_enc=lr_enc, add_gt_noise=add_gt_noise, step=step,
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 121, in normal_flow
    lr_enc = self.rrdbPreprocessing(lr)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 182, in rrdbPreprocessing
    rrdbResults = self.RRDB(lr, get_steps=True)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/ConditionEncoder.py", line 96, in forward
    raw_low_input = x[:, 0:3].exp()
TypeError: 'NoneType' object is not subscriptable

I have spent a lot of time trying to understand the code in your models folder, but it is too complicated for me. Could you clarify how I can calculate FLOPs and #Params for your model? Thanks!

wyf0912 commented 2 years ago

We use code like the following.

import torch
from thop import clever_format, profile

# the tuple is unpacked and passed positionally to model.netG's forward
print(clever_format(profile(model.netG, (None, torch.randn(1, 4, 400, 600).cuda(), torch.randn(1, 192, 50, 75).cuda(), 0, True)), "%.5f"))
wyf0912 commented 2 years ago

More specifically, you can add the aforementioned code at a suitable place in "test.py"/"test_unpaired.py", e.g., after the model is defined.
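
For example, in test_unpaired.py it could look roughly like the following. This is a sketch under assumptions: the model object is named model (as in the traceback above), and the shapes and values of the positional arguments are simply copied from the snippet in the previous comment.

import torch
from thop import clever_format, profile

model.netG = model.netG.cuda()  # make sure the generator is on the GPU
# the tuple below is unpacked positionally into netG's forward;
# values and shapes are copied from the snippet above
flops, params = profile(model.netG,
                        (None,
                         torch.randn(1, 4, 400, 600).cuda(),
                         torch.randn(1, 192, 50, 75).cuda(),
                         0,
                         True))
print(clever_format((flops, params), "%.5f"))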

ShenZheng2000 commented 2 years ago

Thanks for your explanation. After adding model.netG = model.netG.cuda(), I made a slight change to the code you gave, and now it works:

print(clever_format(profile(model.netG, (None, torch.randn(1, 6, 400, 600).cuda(), torch.randn(1, 192, 50, 75).cuda(), 0, True)), "%.5f"))
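
Since the question also asked about inference time, here is a minimal sketch for that, reusing the same positional call that works for profiling above; the variable names are placeholders, the shapes are copied from the profile call, and the warm-up and iteration counts are arbitrary choices.

import time
import torch

inp_a = torch.randn(1, 6, 400, 600).cuda()   # same shapes as in the profile call above
inp_b = torch.randn(1, 192, 50, 75).cuda()

model.netG.eval()
with torch.no_grad():
    for _ in range(10):                       # warm-up
        _ = model.netG(None, inp_a, inp_b, 0, True)
    torch.cuda.synchronize()                  # make sure timing covers all GPU work
    start = time.time()
    for _ in range(100):
        _ = model.netG(None, inp_a, inp_b, 0, True)
    torch.cuda.synchronize()
    print('Time (S) = {:.4f}'.format((time.time() - start) / 100))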