Hi, DavidYanAnDe. Thanks for your work on PEFT. I noticed that the training config used for FGVC in your work is not publicly accessible. We are interested in verifying our module based on your approach and giving proper credit to your research. Is it possible to obtain your training config on FGVC as soon as possible?

Sorry for the late reply; we have been busy with other work and will continue improving the project after a while. The config on FGVC is:
```python
CUB_Cof = {
    "dataset": "CUB_200_2011",
    "lr": 1e-3,
    "wd": 0.05,
    "a_drop": 0.1,
    "v_drop": 0.1,
}

nabirds_Cof = {
    "dataset": "NABirds",
    "lr": 2e-4,
    "wd": 0.05,
    "a_drop": 0.1,
    "v_drop": 0.1,
}

flowers_Cof = {
    "dataset": "OxfordFlowers",
    "lr": 2.5e-3,
    "wd": 0.01,
    "a_drop": 0.1,
    "v_drop": 0.1,
}

Dogs_Cof = {
    "dataset": "StanfordDogs",
    "lr": 2.5e-4,
    "wd": 0.05,
    "a_drop": 0.1,
    "v_drop": 0,
}

Cars_Cof = {
    "dataset": "StanfordCars",
    "lr": 5e-3,
    "wd": 0.05,
    "a_drop": 0.1,
    "v_drop": 0.1,
}
```
Thank you, this information is greatly appreciated. We have also noticed a discrepancy in the number of tunable parameters between the paper and this repository: Linear (0.10M vs 0.18M) and ARC-att (0.15M vs 0.23M) on FGVC. Is this a minor clerical error?
Terribly sorry, this is indeed a clerical error. We used the dataset statistics on page 16 of the Visual Prompt Tuning paper to calculate the number of parameters; the NABirds dataset should have 555 classes, but VPT listed it as 55, so the parameters for 500 classes were missing and the overall value came out lower. Thank you very much for the reminder. The correct numbers are: Linear 0.18M, ARC-att 0.22M, and ARC 0.25M.
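For what it is worth, the size of the gap checks out arithmetically, assuming the reported FGVC numbers are averages over the five datasets and the backbone is ViT-B/16 with 768-dim features (both are assumptions for this back-of-the-envelope check, not stated above):

```python
# A linear head over d-dim features has k * (d + 1) parameters for k classes
# (weights plus bias).  The 500 NABirds classes missing from VPT's table:
missing = 500 * (768 + 1)   # 384,500 parameters
print(missing / 5 / 1e6)    # ~0.077M once averaged over the 5 FGVC datasets
# i.e. ~0.08M, matching the 0.10M -> 0.18M (Linear) and
# 0.15M -> 0.22M (ARC-att) corrections above
```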
Hi, DavidYanAnDe. Thanks for your work on PEFT. Is it possible to obtain your training config on VTAB as soon as possible?
```python
cifar_config = {
    "dataset": "cifar",
    "lr": 0.005,
    "wd": 0.01,
    "a_drop": 0.1,
}

caltech101_config = {
    "dataset": "caltech101",
    "lr": 0.003,
    "wd": 0.05,
    "a_drop": 0.1,
}

dtd_config = {
    "dataset": "dtd",
    "lr": 5e-3,
    "wd": 5e-2,
    "a_drop": 0.8,
}

oxford_flowers102_config = {
    "dataset": "oxford_flowers102",
    "lr": 0.005,
    "wd": 0.00005,
    "a_drop": 0.5,
}

oxford_iiit_pet_config = {
    "dataset": "oxford_iiit_pet",
    "lr": 0.01,
    "wd": 0.05,
    "a_drop": 0.1,
}

svhn_config = {
    "dataset": "svhn",
    "lr": 0.02,
    "wd": 0.05,
    "a_drop": 0.1,
}

sun397_config = {
    "dataset": "sun397",
    "lr": 0.005,
    "wd": 0.00005,
    "a_drop": 0.1,
}

eurosat_config = {
    "dataset": "eurosat",
    "lr": 0.003,
    "wd": 0,
    "a_drop": 0.1,
}

resisc45_config = {
    "dataset": "resisc45",
    "lr": 0.01,
    "wd": 0.0,
    "a_drop": 0.5,
}

patch_camelyon_config = {
    "dataset": "patch_camelyon",
    "lr": 0.005,
    "wd": 0.00005,
    "a_drop": 0.1,
}

diabetic_retinopathy_config = {
    "dataset": "diabetic_retinopathy",
    "lr": 0.005,
    "wd": 0.00005,
    "a_drop": 0.1,
}

clevr_count_config = {
    "dataset": "clevr_count",
    "lr": 0.002,
    "wd": 0.05,
    "a_drop": 0.5,
}

clevr_dist_config = {
    "dataset": "clevr_dist",
    "lr": 0.001,
    "wd": 0.05,
    "a_drop": 0.1,
}

dmlab_config = {
    "dataset": "dmlab",
    "lr": 0.005,
    "wd": 0.1,
    "a_drop": 0.1,
}

kitti_config = {
    "dataset": "kitti",
    "lr": 0.01,
    "wd": 0.0,
    "a_drop": 0.1,
}

smallnorb_azi_config = {
    "dataset": "smallnorb_azi",
    "lr": 0.01,
    "wd": 0.005,
    "a_drop": 0.1,
}

smallnorb_ele_config = {
    "dataset": "smallnorb_ele",
    "lr": 0.001,
    "wd": 0.05,
    "a_drop": 0.1,
}

dsprites_loc_config = {
    "dataset": "dsprites_loc",
    "lr": 0.01,
    "wd": 0.0,
    "a_drop": 0.1,
}

dsprites_ori_config = {
    "dataset": "dsprites_ori",
    "lr": 0.005,
    "wd": 0.0,
    "a_drop": 0.5,
}
```
Thanks for your reply! I would also like to know what batch size you used.
All batch sizes are 32, following SSF.
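In code terms, that is just the following (placeholder tensors standing in for the actual FGVC/VTAB datasets):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data; substitute the real dataset objects here.
dummy = TensorDataset(torch.randn(128, 3, 224, 224),
                      torch.randint(0, 200, (128,)))
loader = DataLoader(dummy, batch_size=32, shuffle=True)  # batch size 32, as in SSF
```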