Closed — xuefei1 closed this issue 3 years ago
This is how nn.Linear FLOPs are counted in the current version:
```python
def count_linear(m, x, y):
    # per output element
    total_mul = m.in_features
    # total_add = m.in_features - 1
    # total_add += 1 if m.bias is not None else 0
    num_elements = y.numel()
    total_ops = total_mul * num_elements

    m.total_ops += torch.DoubleTensor([int(total_ops)])
```
Link: https://github.com/Lyken17/pytorch-OpCounter/blob/master/thop/vision/basic_hooks.py#L131-L140
I noticed that it doesn't consider whether the Linear layer has a bias. Adding the bias probably won't change the overall FLOPs by much, but shouldn't the function still check for the bias?
See https://github.com/Lyken17/pytorch-OpCounter/tree/master/benchmark for the discussion
Basically, for simplicity, thop only counts multiplications.
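For anyone who does want the bias (and the other additions) counted, here is a minimal sketch of a custom hook that follows the commented-out lines in the snippet above. The name `count_linear_with_bias` is just illustrative and not part of thop:

```python
import torch
import torch.nn as nn

def count_linear_with_bias(m: nn.Linear, x, y):
    # multiplications: in_features per output element
    total_mul = m.in_features
    # additions: (in_features - 1) per output element, plus 1 if a bias is added
    total_add = m.in_features - 1
    if m.bias is not None:
        total_add += 1
    num_elements = y.numel()
    total_ops = (total_mul + total_add) * num_elements
    m.total_ops += torch.DoubleTensor([int(total_ops)])
```

If I remember correctly, a hook like this can be plugged in through `thop.profile`'s `custom_ops` argument, e.g. `custom_ops={nn.Linear: count_linear_with_bias}`, so the default multiplication-only convention stays untouched.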