HikariTJU / LD

Localization Distillation for Object Detection (CVPR 2022, TPAMI 2023)
Apache License 2.0

kd_loss implementation issue #74

Open ZaberKo opened 11 months ago

ZaberKo commented 11 months ago

Hello, I found that knowledge_distillation_kl_div_loss() in mmdet/models/losses/kd_loss.py differs from the standard KL divergence definition: it is equivalent to F.kl_div(reduction='mean') rather than F.kl_div(reduction='batchmean'), as described in the F.kl_div documentation.

kd_loss = F.kl_div(
    F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (
        T * T)

The correct KL divergence should be:

kd_loss = F.kl_div(
    F.log_softmax(pred / T, dim=1), target, reduction='none').sum(1) * (
        T * T)

Is there any reason to use the above implementation? The current kl_div is 1/17 of the real kl_div when GFL reg_max=16 (i.e., 17 bins).
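
For illustration, here is a minimal standalone sketch (my own, not from the repo; the shapes and T are placeholders) showing that the two reductions differ only by the constant 1/(reg_max + 1):

import torch
import torch.nn.functional as F

T = 10                 # temperature, value chosen for illustration
reg_max = 16           # GFL setting: reg_max + 1 = 17 bins per box edge
pred = torch.randn(8, reg_max + 1)                            # student logits
target = F.softmax(torch.randn(8, reg_max + 1) / T, dim=1)    # teacher soft targets

kl_elem = F.kl_div(F.log_softmax(pred / T, dim=1), target, reduction='none')

current = kl_elem.mean(1) * (T * T)   # what kd_loss.py computes now
proper = kl_elem.sum(1) * (T * T)     # per-sample KL divergence

print(torch.allclose(current * (reg_max + 1), proper))  # True: the gap is exactly 1/17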

HikariTJU commented 11 months ago

I remember that .mean(1) is equal to reduction='batchmean'?

ZaberKo commented 11 months ago

> I remember that .mean(1) is equal to reduction='batchmean'?

Here is the source code of F.kl_div: https://github.com/pytorch/pytorch/blob/defa0d3a2d230e5d731d5c443c1b9beda2e7fd93/torch/nn/functional.py#L2949-L2958

And the problem here is that kd_loss is subsequently averaged again by the @weighted_loss wrapper.
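
For reference, a rough paraphrase of mmdet's weight_reduce_loss logic (simplified, from my reading of losses/utils.py; details may differ across versions), which the @weighted_loss decorator applies to the value kd_loss returns:

import torch

def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
    # simplified paraphrase, not the exact mmdet source
    if weight is not None:
        loss = loss * weight
    if avg_factor is None:
        if reduction == 'mean':
            loss = loss.mean()
        elif reduction == 'sum':
            loss = loss.sum()
    elif reduction == 'mean':
        # normalize by an explicit avg_factor instead of the element count
        loss = loss.sum() / avg_factor
    return loss

Because kd_loss has already been .mean(1)-reduced per sample, this second averaging over samples never cancels the extra 1/(reg_max + 1) factor.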

HikariTJU commented 11 months ago

So batchmean equals .mean(0)?

ZaberKo commented 11 months ago

> So batchmean equals .mean(0)?

No. "batchmean" means .sum()/batch_size, i.e., .sum(1).mean()

HikariTJU commented 11 months ago

OK, I get your point: mathematically .sum(1) is the correct implementation, and .mean(1) = .sum(1)/17. That's true, but how is it related to batchmean?

ZaberKo commented 10 months ago

> OK, I get your point: mathematically .sum(1) is the correct implementation, and .mean(1) = .sum(1)/17. That's true, but how is it related to batchmean?

BTW, I also found that loss_ld uses a weighted sum and is not divided by avg_factor (i.e., the sum of weights). Is this a typo or intended behavior to skip normalization?
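
To make the question concrete, a generic illustration of the two options (not the repo's actual code; the tensors are placeholders):

import torch

loss = torch.rand(100)        # per-location LD losses
weight = torch.rand(100)      # per-location weights (e.g. only foreground locations non-zero)
avg_factor = weight.sum()

loss_weighted_sum = (loss * weight).sum()                # observed behavior: no normalization
loss_normalized = (loss * weight).sum() / avg_factor     # divided by the sum of weights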

ZaberKo commented 10 months ago

FYI: I recorded the ratio avg_factor/(self.reg_max+1) during training. Maybe it will help this discussion.

[image: plot of avg_factor/(self.reg_max+1) over training iterations]

HikariTJU commented 10 months ago

It's intended behavior, because experiments show that not dividing is better. I don't know the theory behind this, though.

ZaberKo commented 10 months ago

> It's intended behavior, because experiments show that not dividing is better. I don't know the theory behind this, though.

I see, thanks for the reply.