eric11220 / pretrained-models-in-CL

Official repository for Do Pre-trained Models Benefit Equally in Continual Learning? (Accepted to WACV'23)
https://eric11220.github.io/publication/WACV23/

The reason for the logit_scale clipping in *ClipImageEncoder* #1

Closed Iranb closed 1 year ago

Iranb commented 1 year ago

I have some confusion after reading the code. In clip_encoder.py, lines 38-39:

```python
if self.logit_scale > 4.605:
    self.logit_scale.data = torch.tensor(4.605).to(self.logit_scale.device)
```

What does 4.605 mean? How is this value computed?

eric11220 commented 1 year ago

It is the clamp threshold that CLIP uses during training, to prevent the scaling factor from exceeding 100 (exp(4.605) ≈ 100).
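
For reference, 4.605 is ln(100), so clamping the log-scale parameter at 4.605 caps the effective temperature exp(logit_scale) at roughly 100. Below is a minimal standalone sketch (not the repository's code) illustrating this relation; the starting value 5.0 is a hypothetical example past the cap:

```python
import torch

# CLIP stores the softmax temperature as a log-scale parameter.
# Clamping it at ln(100) ≈ 4.6052 caps exp(logit_scale) at 100.
logit_scale = torch.nn.Parameter(torch.tensor(5.0))  # hypothetical value beyond the cap

max_log_scale = torch.log(torch.tensor(100.0)).item()
print(max_log_scale)  # 4.6051...

with torch.no_grad():
    logit_scale.clamp_(max=max_log_scale)  # clamp in place, as the repo's check does

print(logit_scale.exp())  # tensor(100.) — the maximum scaling factor
```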