wusize / CLIPSelf

[ICLR2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction
https://arxiv.org/abs/2310.01403

Why is class_weight uneven between categories in F-ViT? #23

Closed · Bilibilee closed 5 months ago

Bilibilee commented 5 months ago

link

class_weight = [
    1.0, 1.0, 1.0, 1.0, 0, 0, 1.0, 1.0, 1.0, 1.0, 1.0, 0, 0, 1.0, 1.0, 0, 0,
    1.0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0, 0, 1.0,
    0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0, 0, 1.0, 1.0, 1.0, 0, 1.0, 1.0, 1.0, 1.0, 0, 1.0, 0.6
]

Why is class_weight uneven between categories in F-ViT/configs/ov_coco/fvit_vitb16_upsample_fpn_bs64_3e_ovcoco_eva_original.py?

wusize commented 5 months ago

The zero-valued weights correspond to the novel categories. Ideally, these categories should not appear during training. However, to unify the model configs for training and testing, we set their weights to zero as a workaround.
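
To illustrate the effect (this is a standalone sketch, not the repository's actual loss code): the config is mmdetection-style, where such a class_weight list is typically passed to a weighted cross-entropy loss, with the last entry (0.6 here) likely acting as the background weight. The minimal PyTorch example below, using dummy logits and hypothetical label indices, shows why zero entries effectively remove the novel classes from training even though they remain in the label space.

    import torch
    import torch.nn.functional as F

    # Per-class weights copied from the config quoted above:
    # 1.0 for base classes, 0 for novel classes, 0.6 for the last (background) entry.
    class_weight = torch.tensor([
        1.0, 1.0, 1.0, 1.0, 0, 0, 1.0, 1.0, 1.0, 1.0, 1.0, 0, 0, 1.0, 1.0, 0, 0,
        1.0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0, 0, 1.0,
        0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0, 1.0, 0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 0, 1.0, 1.0, 1.0, 0, 1.0, 1.0, 1.0, 1.0, 0, 1.0, 0.6
    ])

    num_classes = class_weight.numel()      # 65 categories + background
    logits = torch.randn(4, num_classes)    # dummy classification scores
    labels = torch.tensor([0, 4, 21, 65])   # indices 4 and 21 have weight 0 (novel classes)

    # Per-sample weighted cross-entropy: samples labelled with a zero-weight
    # class contribute exactly 0 to the loss, so novel categories are ignored
    # during training while the config stays identical for testing.
    per_sample = F.cross_entropy(logits, labels, weight=class_weight, reduction='none')
    print(per_sample)  # the entries for labels 4 and 21 are 0

With reduction='mean', PyTorch also normalizes by the sum of the target weights, so the zero-weight samples neither add loss nor dilute the average over base-class samples.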