dongzelian / SSF

[NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning".
https://arxiv.org/pdf/2210.08823.pdf

Training script of vtab-cifar100 under full fine-tuning #8

Closed · weeknan closed this issue 1 year ago

weeknan commented 1 year ago

Hi! I found it hard to reproduce the CIFAR-100 full fine-tuning result in Table 4 (68.9). My best reproduction uses the supervised pre-trained ViT-B with lr=0.01, wd=1e-4, and the SGD optimizer, and I got 66.3 top-1 accuracy, which leaves a pretty large gap to 68.9; my setup is sketched below. Could you provide the training script for vtab-cifar100 under the full fine-tuning setting? Thanks!
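
For reference, here is roughly what my reproduction looked like (a minimal sketch, not your repo's script; the timm checkpoint name, augmentation, batch size, and epoch count are my own choices, and it trains on the full torchvision CIFAR-100 set rather than the VTAB-1k split):

```python
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Supervised ImageNet pre-trained ViT-B/16 from timm (checkpoint choice is mine),
# with a fresh 100-way classification head.
model = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=100).cuda()

# Plain 224x224 preprocessing; the paper's exact augmentation may differ.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
train_set = datasets.CIFAR100(root='./data', train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Full fine-tuning: all parameters are updated, with the lr/wd I mentioned above.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(100):  # epoch count is a guess, not taken from the paper
    for images, labels in train_loader:
        images, labels = images.cuda(), labels.cuda()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```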

dongzelian commented 1 year ago

@weeknan We did not actually run vtab-cifar100 full fine-tuning ourselves due to resource limitations. The number is taken from Table 13 of VPT (https://arxiv.org/pdf/2203.12119.pdf), where you can find the vtab-cifar100 full fine-tuning results; refer to the VPT code (https://github.com/KMnP/vpt) for the specific implementation. Thanks!

weeknan commented 1 year ago

OK, thanks!