Closed: suzhenghang closed this issue 4 years ago
Hi, we adapted PyTorch code that is available online (link) to count FLOPs. To count the FLOPs of the correlation computation, we simply added a few lines as follows:
```python
import torch

def count_correlation(m, x, y):
    # x[0]: input feature map of shape (b, c, h1, w1)
    b, c, h1, w1 = x[0].size()
    # each output value is a dot product over c channels:
    # c multiplications + (c - 1) additions
    ops_per_instance = 2 * c - 1
    total_ops = y.numel() * ops_per_instance
    m.total_ops += torch.Tensor([int(total_ops)])
```
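As a sanity check, the per-output cost of `2*c - 1` can be applied by hand. This is only a sketch with made-up shapes: the feature dimensions below and the assumption that the correlation produces one dot product per pair of spatial positions are illustrative, not taken from the paper.

```python
# Hypothetical feature-map shape (b, c, h1, w1) for illustration only.
b, c, h1, w1 = 1, 256, 32, 32

# Assumed dense correlation: every position in one map is compared
# against every position in the other, so the output has
# (h1 * w1) * (h1 * w1) values.
out_numel = (h1 * w1) * (h1 * w1)

# Each output value is a length-c dot product:
# c multiplications + (c - 1) additions.
ops_per_instance = 2 * c - 1

total_ops = out_numel * ops_per_instance
print(total_ops)  # → 535822336
```

The same arithmetic is what the hook above performs, except that it reads `c` from the module input and the output size from `y.numel()` at profiling time.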
Note that we do not count FLOPs for a few operations, e.g., L2-normalization and Gaussian kernel masking, whose computational costs are relatively small. We committed our FLOPs counter code to our repository.
Thanks for your interest in our work; questions are always welcome.
Thanks.
Done. We updated the FLOPs code.
Hi, could you share the code for computing the FLOPs reported in your paper? Thanks in advance.