ylsung / VL_adapter

PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)
MIT License

Where is the entrance to the setting of half-shared adapters? #15

Closed · chenxshuo closed 1 year ago

chenxshuo commented 1 year ago

Hi again, I would like to ask where the entrance to the half-shared adapters is.

In the README, there are only scripts for single and multiple adapters.

Thanks in advance!

ylsung commented 1 year ago

Please try the following script, where I added --share_up_sampler to the command from multiple_adapters.sh. Thanks.

task=multitask

# "t5" or "bart"
model="bart"

echo $model

if [ "$model" == "t5" ]
then
    folder_prefix="VLT5"
    backbone="t5-base"
    batch_size=300
elif [ "$model" == "bart" ]
then
    folder_prefix="VLBart"
    backbone="facebook/bart-base"
    batch_size=500
fi

echo $folder_prefix
echo $backbone

feature=RN101

lr=3e-4
name=4tasks_hard_${feature}_LMUpSharedAdapter+r8+ln_bs${batch_size}_image224_lr${lr}
output=snap/${folder_prefix}_${task}/$name

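# --share_up_sampler is the switch for the half-shared variant: it shares the
# adapters' up-projection (up-sampler) across the four tasks, while the
# down-projections stay task-specific.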
TOKENIZERS_PARALLELISM=True PYTHONPATH=$PYTHONPATH:./src \
python -m torch.distributed.launch \
    --nproc_per_node=$1 \
    --master_port=26757 \
    src/${task}.py \
    --distributed --multiGPU \
    --optim adamw \
    --warmup_ratio 0.1 \
    --clip_grad_norm 5 \
    --lr ${lr} \
    --epochs 20 \
    --num_workers 4 \
    --backbone ${backbone} \
    --output $output ${@:2} \
    --num_beams 5 \
    --batch_size ${batch_size} \
    --valid_batch_size ${batch_size} \
    --use_adapter \
    --share_up_sampler \
    --unfreeze_layer_norms \
    --reduction_factor 8 \
    --tasks "vqa,gqa,nlvr,caption" \
    --feature ${feature} --n_boxes 36 --downsample \
    --image_size "(224,224)" \
    --run_name $name
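
To launch it, pass the number of GPUs as the first positional argument; everything after that is forwarded to src/multitask.py through ${@:2}. For example, assuming you save the script as half_shared_adapters.sh (the filename here is illustrative):

bash half_shared_adapters.sh 2

For intuition, here is a minimal PyTorch sketch of what the half-shared design amounts to, based on what --share_up_sampler toggles in this thread: each task keeps its own down-projection while all tasks share a single up-projection. Class and argument names below are illustrative, not the repository's actual implementation.

import torch
import torch.nn as nn

class HalfSharedAdapters(nn.Module):
    """Multi-task adapters whose up-projection is shared across tasks (sketch)."""

    def __init__(self, d_model=768, reduction_factor=8,
                 tasks=("vqa", "gqa", "nlvr", "caption")):
        super().__init__()
        d_bottleneck = d_model // reduction_factor
        # Task-specific down-projections (one per task).
        self.down = nn.ModuleDict({t: nn.Linear(d_model, d_bottleneck) for t in tasks})
        # A single up-projection shared by every task: the "half-shared" part.
        self.shared_up = nn.Linear(d_bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, hidden_states, task):
        # Standard adapter: down-project, nonlinearity, up-project, residual.
        h = self.shared_up(self.act(self.down[task](hidden_states)))
        return hidden_states + h

x = torch.randn(2, 16, 768)
adapters = HalfSharedAdapters()
print(adapters(x, task="vqa").shape)  # torch.Size([2, 16, 768])
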
chenxshuo commented 1 year ago

Thanks again, let me try this script~ 🔥