dongzelian / SSF

[NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning".
https://arxiv.org/pdf/2210.08823.pdf
MIT License
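For context, the operation the repo title refers to is a learnable per-channel scale and shift applied to features of a frozen backbone. Below is a minimal PyTorch sketch of that idea; the class name and initialization are illustrative, not the repo's actual API.

```python
import torch
import torch.nn as nn

class SSFScaleShift(nn.Module):
    """Minimal sketch of SSF's per-channel scale-and-shift:
    y = gamma * x + beta. Name and init are illustrative only."""
    def __init__(self, dim):
        super().__init__()
        # Learnable scale (near 1) and shift (near 0); in SSF these are
        # the only tuned parameters while the backbone stays frozen.
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        # x: (..., dim); gamma/beta broadcast over the leading dimensions.
        return x * self.gamma + self.beta
```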

About the performance. #5

Closed weeknan closed 1 year ago

weeknan commented 1 year ago

In the paper, Table 1 shows SSF reaching 93.99% top-1 accuracy with ViT-B/16 on CIFAR-100, but in Table 4 SSF gets 69.0% top-1 accuracy with ViT-B/16 on CIFAR-100. Are the settings different between these two results? They seem to use the same model.

JieShibo commented 1 year ago

Hi. I think the CIFAR-100 dataset in the VTAB-1k benchmark uses only 1,000 training samples, while Table 1 shows results on the full CIFAR-100 dataset with 50,000 training images. There are more details in Table 8.
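To make the gap concrete, here is a rough PyTorch/torchvision illustration of the two training budgets. The random 1,000-example subset is only for illustration; the actual VTAB-1k benchmark fixes a specific 1,000-example subset (800 train / 200 val), not a random draw.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Full CIFAR-100 training set: the Table 1 setting (50,000 images).
full_train = datasets.CIFAR100(root="./data", train=True, download=True,
                               transform=transforms.ToTensor())
print(len(full_train))  # 50000

# A VTAB-1k-style budget: only 1,000 training examples in total.
g = torch.Generator().manual_seed(0)
indices = torch.randperm(len(full_train), generator=g)[:1000]
vtab_like_train = Subset(full_train, indices.tolist())
print(len(vtab_like_train))  # 1000
```

With 50x less training data, a much lower top-1 accuracy on the same model is expected, which accounts for the 93.99% vs. 69.0% difference.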

weeknan commented 1 year ago

I see, thank you!