arthurdouillard / incremental_learning.pytorch

A collection of incremental learning paper implementations including PODNet (ECCV20) and Ghost (CVPR-W21).
MIT License

about the podLoss #26

Closed liujianzhao6328057 closed 3 years ago

liujianzhao6328057 commented 3 years ago

Hi, thanks for your great work. I wonder why the features are squared in advance with `a = torch.pow(a, 2); b = torch.pow(b, 2)`? Thank you. (https://github.com/arthurdouillard/incremental_learning.pytorch/blob/889359036fea30aa5f8dd2b69455bce507dd601c/inclearn/lib/losses/distillation.py#L64)

arthurdouillard commented 3 years ago

Hey,

Squaring the features before computing the POD embedding improves results. This trick has been used multiple times in feature-based distillation; see this paper https://arxiv.org/abs/1612.03928 for more information about it.

Note that the squaring helps not only for POD but also for the alternatives (POD-pixels, POD-channels, etc.).
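
For context, here is a minimal sketch of where that squaring sits in a POD-spatial-style loss. The function name and pooling details below are illustrative, not the exact repo code:

```python
import torch
import torch.nn.functional as F


def pod_spatial_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Illustrative POD-spatial-style distillation loss (not the exact repo code).

    a, b: feature maps of shape (batch, channels, height, width)
    from the old and new models respectively.
    """
    # Squaring the activations before pooling emphasizes the most
    # active regions, as in attention transfer (arXiv:1612.03928).
    a = torch.pow(a, 2)
    b = torch.pow(b, 2)

    # Pool along height and width separately, then concatenate:
    # this keeps some spatial information instead of collapsing it all.
    a_pod = torch.cat(
        [a.sum(dim=2).flatten(1), a.sum(dim=3).flatten(1)], dim=-1
    )
    b_pod = torch.cat(
        [b.sum(dim=2).flatten(1), b.sum(dim=3).flatten(1)], dim=-1
    )

    # L2-normalize the embeddings before comparing them.
    a_pod = F.normalize(a_pod, dim=-1, p=2)
    b_pod = F.normalize(b_pod, dim=-1, p=2)

    # Mean L2 distance between old- and new-model embeddings.
    return torch.norm(a_pod - b_pod, p=2, dim=-1).mean()
```

Dropping the two `torch.pow` lines gives a valid loss too, but as noted above, squaring first tends to work better empirically.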