https://github.com/Lyken17/pytorch-OpCounter/blob/43c064afb71383501e41eaef9e8c8407265cf77f/thop/profile.py#L32
The same count_normalization function is used for every norm-like module, but batchnorms store running estimates of the mean and stdev, while layernorms compute them at inference time. Shouldn't layernorms account for the cost of evaluating the mean and stdev? The difference is pretty significant:
The mean is n flops, the stdev is roughly 2n more, and that's before the rest of the norm module, which is another 2n.
Is there a reason layernorms should be estimated at only 2n flops by reusing batchnorm's count?
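For context, here is a minimal sketch of how I'd work around this today with thop's `custom_ops` hook, using the ~5n breakdown above. The constant and the `count_layernorm` name are my own assumptions, not anything from the library:

```python
import torch
import torch.nn as nn
from thop import profile

def count_layernorm(m: nn.LayerNorm, x, y):
    # Rough per-element breakdown (assumption, matching the arithmetic above):
    #   mean: n, stdev: 2n, normalize + affine: 2n  ->  ~5n total
    x = x[0]
    n = x.numel()
    m.total_ops += torch.DoubleTensor([5 * n])

model = nn.Sequential(nn.Linear(64, 64), nn.LayerNorm(64))
dummy = torch.randn(1, 64)
flops, params = profile(model, inputs=(dummy,),
                        custom_ops={nn.LayerNorm: count_layernorm})
print(flops, params)
```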