ZhangYuanhan-AI / NOAH

[TPAMI] Searching prompt modules for parameter-efficient transfer learning.
MIT License

REPRODUCIBILITY: getting accuracies of 9.43%, 9.10%, 8.84% with LoRA on cifar100 (VTAB). #26

Closed. prafful-kumar closed this issue 2 months ago.

prafful-kumar commented 2 months ago

Hello,

I've been trying to train LoRA, Adapter, and VPT using the techniques outlined in the repository. However, despite following the instructions and trying different seeds (as shown in the linked folder here), only NOAH reaches the expected accuracy; LoRA, Adapter, and VPT do not. On the VTAB cifar100 dataset, I get 9.43%, 9.10%, and 8.84% with LoRA.

I've been stuck on this issue for several days and can't figure out what I'm doing wrong. Could you please help me identify the potential issues or suggest any troubleshooting steps I might have missed?
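In case it helps with debugging, here is a small sanity check I would run inside supernet_train_prompt.py right after the model is built (this is only my own sketch, not code from the repo; "model" stands for whatever module the script constructs). For a LoRA run I would expect only a small fraction of parameters, the LoRA matrices plus whatever the code deliberately leaves unfrozen, to show up as trainable:

    def summarize_trainable(model):
        # Print every parameter that still requires gradients and report the
        # trainable/total ratio; for a parameter-efficient method this should
        # be a small percentage of the ViT-B backbone.
        total, trainable = 0, 0
        for name, p in model.named_parameters():
            total += p.numel()
            if p.requires_grad:
                trainable += p.numel()
                print(f"trainable: {name} {tuple(p.shape)}")
        print(f"{trainable} / {total} parameters require grad "
              f"({100.0 * trainable / total:.2f}%)")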

Thank you in advance for your assistance!

slurm file:

set -x

currenttime=$(date "+%Y%m%d_%H%M%S")

CONFIG=experiments/LoRA/ViT-B_prompt_lora_8.yaml
PARTITION='gpu'
JOB_NAME=LO-VTAB
GPUS=1
CKPT=$2
WEIGHT_DECAY=0.0001

TECHNIQUE=LORA
ATTN_MAP=False

GPUS_PER_NODE=1
CPUS_PER_TASK=5
MEMORY=50G              # Added memory request
TIME=5:00:00            # Added time limit
GRES=gpu:tesla-smx2:1   # Adjusted GPU type and count
SRUN_ARGS=${SRUN_ARGS:-""}

mkdir -p logs
export PYTHONPATH="$(dirname $0)/..":$PYTHONPATH

for LR in 0.001
do
  for DATASET in cifar100  # caltech101 dtd oxford_flowers102 svhn sun397 oxford_pet patch_camelyon eurosat resisc45 diabetic_retinopathy clevr_count clevr_dist dmlab kitti dsprites_loc dsprites_ori smallnorb_azi smallnorb_ele
  do
    export MASTER_PORT=$((12000 + $RANDOM % 20000))
    srun -p ${PARTITION} \
      --job-name=${JOB_NAME}-${DATASET} \
      --gres=${GRES} \
      --ntasks=${GPUS} \
      --ntasks-per-node=${GPUS_PER_NODE} \
      --cpus-per-task=${CPUS_PER_TASK} \
      --kill-on-bad-exit=1 \
      --mem=${MEMORY} \
      --time=${TIME} \
      ${SRUN_ARGS} \
      python supernet_train_prompt.py \
        --data-path=/scratch/itee/uqpkuma6/PEFT/NOAH/data/vtab-1k/${DATASET} \
        --seed=30 \
        --data-set=${DATASET} \
        --cfg=${CONFIG} \
        --resume=${CKPT} \
        --output_dir=./saves/${DATASET}_lr-${LR}_wd-${WEIGHT_DECAY}_lora_100ep_noaug_xavier_dp01_same-transform_nomixup \
        --batch-size=64 \
        --lr=${LR} \
        --epochs=100 \
        --is_LoRA \
        --weight-decay=${WEIGHT_DECAY} \
        --no_aug \
        --mixup=0 \
        --cutmix=0 \
        --direct_resize \
        --smoothing=0 \
        --launcher="slurm" \
        2>&1 | tee -a logs/${currenttime}-${DATASET}-${LR}-lora.log > /dev/null &
    echo -e "\033[32m[ Please check log: \"logs/${currenttime}-${DATASET}-${LR}-lora.log\" for details. ]\033[0m"
  done
done
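One thing I am also double-checking on my side: in the script above CKPT=$2, so --resume=${CKPT} is empty whenever the job is submitted without a second argument, and in that case the pretrained ViT-B weights might never be loaded, which could explain the very low accuracy. A quick standalone check to confirm the checkpoint file loads and contains ViT weights (again only my own sketch; check_ckpt.py is a name I made up):

    # check_ckpt.py: sanity-check the checkpoint passed as $2 to the slurm script.
    import sys
    import torch

    ckpt_path = sys.argv[1]                  # same file you would pass as $2
    state = torch.load(ckpt_path, map_location="cpu")
    if isinstance(state, dict) and "model" in state:
        state = state["model"]               # some checkpoints nest weights under "model"
    print(f"loaded {len(state)} entries; sample keys: {list(state)[:5]}")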