ZaberKo opened this issue 1 year ago
I remember that .mean(1) is equal to reduction='batchmean'?
Here is the source code of F.kl_div:
https://github.com/pytorch/pytorch/blob/defa0d3a2d230e5d731d5c443c1b9beda2e7fd93/torch/nn/functional.py#L2949-L2958
And the problem here is that kd_loss is subsequently averaged over the batch by the @weighted_loss wrapper.
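To make that concrete, here is a minimal sketch of the pattern being discussed (made-up tensors, not the actual mmdet code; `C` stands for the number of bins per sample): taking `.mean(1)` per sample and then letting the wrapper average over the batch reproduces `reduction='mean'`, not `'batchmean'`.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, C, T = 8, 17, 10.0                     # batch size, bins per sample, temperature
pred, soft_label = torch.randn(N, C), torch.randn(N, C)

log_p = F.log_softmax(pred / T, dim=1)
q = F.softmax(soft_label / T, dim=1)

# Per-sample loss with .mean(1), as in the kd_loss being discussed.
per_sample = F.kl_div(log_p, q, reduction='none').mean(1)    # shape (N,)

# An outer average over the batch (what a weighted-loss wrapper would do)
# then gives the same result as reduction='mean', not 'batchmean'.
wrapped = per_sample.mean()
print(torch.allclose(wrapped, F.kl_div(log_p, q, reduction='mean')))  # True
```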
So batchmean equals .mean(0)?
No. "batchmean" means .sum()/batch_size, i.e., .sum(1).mean()
OK, I get your point: you mean that mathematically .sum(1) is the correct implementation and .mean(1) = .sum(1)/17. That's true, but how is it related to batchmean?
BTW, I also found that loss_ld uses a weighted sum and is not divided by avg_factor (i.e., the sum of weights). Is this a typo, or intended behavior of skipping the normalization?
FYI: I recorded the ratio avg_factor/(self.reg_max+1) during training. Maybe it will help this discussion.
It's intended behavior, because experiments show that not dividing works better. I don't know the theory behind this, though.
I see, thanks for the reply.
Hello, I found that knowledge_distillation_kl_div_loss() in mmdet/models/losses/kd_loss.py uses a different implementation compared to the normal KL Div definition: it is equivalent to F.kl_div(reduction='mean') instead of F.kl_div(reduction='batchmean'), as mentioned in the F.kl_div docs. The correct KL Div should be like:
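For reference, a minimal sketch of what a 'batchmean'-style per-sample loss could look like, assuming the usual temperature-softened KD formulation (the function and argument names here are illustrative, not the exact mmdet code):

```python
import torch
import torch.nn.functional as F

def kd_kl_div_loss(pred, soft_label, T):
    """Per-sample KL divergence, summed over the class dim and scaled by T^2.

    Averaging these per-sample values over the batch matches
    F.kl_div(..., reduction='batchmean') * T * T.
    """
    target = F.softmax(soft_label / T, dim=1).detach()
    kd_loss = F.kl_div(
        F.log_softmax(pred / T, dim=1), target,
        reduction='none').sum(1) * (T * T)      # .sum(1) instead of .mean(1)
    return kd_loss
```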
Is there any reason to use the .mean(1) implementation? The current kl_div is only 1/17 of the real kl_div when GFL reg_max=16 (i.e., 17 bins).