facebookresearch / msn

Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)

The detailed settings for the 1% evaluation #11

Closed: Haoqing-Wang closed this issue 2 years ago

Haoqing-Wang commented 2 years ago

I only get 66.9% accuracy with the released ViT-S checkpoint, which is lower than the reported 67.2%. I used the provided default settings for the logistic regression:

```python
# Center and normalize the extracted embeddings in place
# (columns=False treats each row as one sample).
cyan.preprocess(embs, normalize=normalize, columns=False, centering=True)

# Multinomial logistic regression on the frozen features.
classifier = cyan.MultiClassifier(loss='multiclass-logistic', penalty=penalty,
                                  fit_intercept=False)

# Fit with the given regularization strength.
classifier.fit(embs, labs, it0=10, lambd=lambd, lambd2=lambd, nthreads=-1,
               tol=1e-3, solver='auto', seed=0, max_epochs=300)
```

Besides, I set `--blocks=1`, `--lambd=0.0025`, `--penalty=l2`, `--normalize=True`. Is there something wrong with my settings? For reference, the sketch below shows how I compute the reported accuracy from the fitted classifier.
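This is a minimal sketch, assuming cyanure's sklearn-style `score()` returns classification accuracy; `test_embs` / `test_labs` are my names for the validation-set embeddings and labels, and the feature-extraction step is not shown:

```python
# Sketch: evaluate the fitted classifier on held-out features.
# `test_embs` / `test_labs` are hypothetical names for the validation-set
# embeddings and labels, preprocessed the same way as the training features.
cyan.preprocess(test_embs, normalize=normalize, columns=False, centering=True)

# Assumes score() returns the fraction of correctly classified samples.
test_acc = classifier.score(test_embs, test_labs)
print(f'top-1 accuracy: {100. * test_acc:.1f}%')  # this prints 66.9 for me
```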

MidoAssran commented 2 years ago

Hi @Haoqing-Wang, apologies for the late reply. Set the regularization strength to 0.075. With this command:

```
python logistic_eval.py \
  --root-path /datasets01/ \
  --image-folder imagenet_full_size/061417/ \
  --subset-path imagenet_subsets1/1percent.txt \
  --pretrained models/ \
  --fname vits16_800ep.pth.tar \
  --model-name deit_small \
  --lambd 0.075
```

I get

(screenshot of the evaluation output, 2022-08-05, showing the resulting 1% top-1 accuracy)
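If you want to check sensitivity to the regularization strength yourself, a quick sweep over candidate values works. This is a rough sketch, not from the repo script; it reuses the cyanure calls from the snippet earlier in the thread, with `embs`, `labs`, `test_embs`, `test_labs` and the `score()` assumption as above:

```python
# Refit the logistic-regression head for several lambd values and compare
# held-out accuracy; 0.0025 and 0.075 are the two values discussed above.
for lambd in [0.0025, 0.01, 0.025, 0.075, 0.25]:
    classifier = cyan.MultiClassifier(loss='multiclass-logistic',
                                      penalty='l2', fit_intercept=False)
    classifier.fit(embs, labs, it0=10, lambd=lambd, lambd2=lambd,
                   nthreads=-1, tol=1e-3, solver='auto', seed=0,
                   max_epochs=300)
    acc = classifier.score(test_embs, test_labs)
    print(f'lambd={lambd}: top-1={100. * acc:.1f}%')
```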