val-iisc / VL2V-ADiP

[CVPR 2024] Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
http://val.cds.iisc.ac.in/VL2V-ADiP/
MIT License

about white box setting code #1

Open downdric opened 2 months ago

downdric commented 2 months ago

This work on DG is very impressive! If possible, could you share the code for the white-box setting?

AshishAsokan commented 3 weeks ago

Hi @downdric, thanks for your interest in our work!

The code for the white-box setting is included in this codebase. You can run the white-box setting on CLIP ViT-B/16 using the following command:

```shell
CUDA_VISIBLE_DEVICES=$gpu_id python train_all.py $name \
    --clip_backbone $backbone \
    --swad_fix \
    --lmd $lmd \
    --seed $seed \
    --model_save 100 \
    --data_dir $path \
    --backbone "clip_vit-b16" \
    --algorithm DFC_CLIP_INIT \
    --dataset $dataset \
    --swad True
```

The command above takes its arguments from the running scripts in the `scripts` folder. The key changes compared to those scripts are the `--backbone` and `--algorithm` arguments, which select the CLIP ViT-B/16 backbone and the algorithm for training with this backbone, respectively. Please try this out and let us know if you face any issues.
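For reference, here is a sketch of how the shell variables might be filled in for a single run. All concrete values below (the experiment name, dataset, data path, `lmd` value, and the `--clip_backbone` string) are illustrative assumptions, not tested configurations; take the exact values from the running scripts in the `scripts` folder. The sketch only assembles and prints the command rather than executing it:

```shell
# All values below are example assumptions; consult scripts/ for real ones.
gpu_id=0
name="whitebox_vitb16_pacs"   # experiment name (assumed)
backbone="ViT-B/16"           # assumed CLIP backbone string; verify against scripts/
lmd=1.0                       # assumed loss-weight value
seed=0
path="/data/domainbed"        # assumed dataset root
dataset="PACS"                # one of the DomainBed datasets (assumed choice)

# Assemble the training command; echoed here for illustration instead of run.
cmd="CUDA_VISIBLE_DEVICES=$gpu_id python train_all.py $name \
    --clip_backbone $backbone --swad_fix --lmd $lmd --seed $seed \
    --model_save 100 --data_dir $path --backbone clip_vit-b16 \
    --algorithm DFC_CLIP_INIT --dataset $dataset --swad True"
echo "$cmd"
```

To launch the actual run, drop the `cmd=`/`echo` wrapper and execute the command directly with your own values substituted.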

Thanks.