muzairkhattak / multimodal-prompt-learning

[CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning".
https://muzairkhattak.github.io/multimodal-prompt-learning/
MIT License

About CLIP zero-shot baseline #26

Closed: hooxizz closed this issue 1 year ago

hooxizz commented 1 year ago

Hi there,

I am a newbie in VLMs and want to reproduce the CLIP zero-shot baseline.

I followed INSTALL.md and used a command like:

bash scripts/zsclip/zeroshot.sh caltech101 vit_b16

I only get 93.3% accuracy on the caltech101 dataset, while the reported accuracy in Table 3 (c) is 95.40%. Other datasets also come out lower than reported. Did I miss something? I attach the zero-shot log for caltech101 below. Thanks in advance!

***************
** Arguments **
***************
backbone: 
config_file: configs/trainers/CoOp/vit_b16.yaml
dataset_config_file: configs/datasets/caltech101.yaml
eval_only: True
head: 
load_epoch: None
model_dir: 
no_train: False
opts: []
output_dir: output/ZeroshotCLIP/vit_b16/caltech101
resume: 
root: /home/clayton/Project/clip/multimodal-prompt-learning/DATA/
seed: -1
source_domains: None
target_domains: None
trainer: ZeroshotCLIP
transforms: None
************
** Config **
************
DATALOADER:
  K_TRANSFORMS: 1
  NUM_WORKERS: 8
  RETURN_IMG0: False
  TEST:
    BATCH_SIZE: 100
    SAMPLER: SequentialSampler
  TRAIN_U:
    BATCH_SIZE: 32
    N_DOMAIN: 0
    N_INS: 16
    SAME_AS_X: True
    SAMPLER: RandomSampler
  TRAIN_X:
    BATCH_SIZE: 32
    N_DOMAIN: 0
    N_INS: 16
    SAMPLER: RandomSampler
DATASET:
  ALL_AS_UNLABELED: False
  CIFAR_C_LEVEL: 1
  CIFAR_C_TYPE: 
  NAME: Caltech101
  NUM_LABELED: -1
  NUM_SHOTS: -1
  ROOT: /home/clayton/Project/clip/multimodal-prompt-learning/DATA/
  SOURCE_DOMAINS: ()
  STL10_FOLD: -1
  SUBSAMPLE_CLASSES: all
  TARGET_DOMAINS: ()
  VAL_PERCENT: 0.1
INPUT:
  COLORJITTER_B: 0.4
  COLORJITTER_C: 0.4
  COLORJITTER_H: 0.1
  COLORJITTER_S: 0.4
  CROP_PADDING: 4
  CUTOUT_LEN: 16
  CUTOUT_N: 1
  GB_K: 21
  GB_P: 0.5
  GN_MEAN: 0.0
  GN_STD: 0.15
  INTERPOLATION: bicubic
  NO_TRANSFORM: False
  PIXEL_MEAN: [0.48145466, 0.4578275, 0.40821073]
  PIXEL_STD: [0.26862954, 0.26130258, 0.27577711]
  RANDAUGMENT_M: 10
  RANDAUGMENT_N: 2
  RGS_P: 0.2
  RRCROP_SCALE: (0.08, 1.0)
  SIZE: (224, 224)
  TRANSFORMS: ('random_resized_crop', 'random_flip', 'normalize')
MODEL:
  BACKBONE:
    NAME: ViT-B/16
    PRETRAINED: True
  HEAD:
    ACTIVATION: relu
    BN: True
    DROPOUT: 0.0
    HIDDEN_LAYERS: ()
    NAME: 
  INIT_WEIGHTS: 
OPTIM:
  ADAM_BETA1: 0.9
  ADAM_BETA2: 0.999
  BASE_LR_MULT: 0.1
  GAMMA: 0.1
  LR: 0.002
  LR_SCHEDULER: cosine
  MAX_EPOCH: 200
  MOMENTUM: 0.9
  NAME: sgd
  NEW_LAYERS: ()
  RMSPROP_ALPHA: 0.99
  SGD_DAMPNING: 0
  SGD_NESTEROV: False
  STAGED_LR: False
  STEPSIZE: (-1,)
  WARMUP_CONS_LR: 1e-05
  WARMUP_EPOCH: 1
  WARMUP_MIN_LR: 1e-05
  WARMUP_RECOUNT: True
  WARMUP_TYPE: constant
  WEIGHT_DECAY: 0.0005
OUTPUT_DIR: output/ZeroshotCLIP/vit_b16/caltech101
RESUME: 
SEED: -1
TEST:
  COMPUTE_CMAT: False
  EVALUATOR: Classification
  FINAL_MODEL: last_step
  NO_TEST: False
  PER_CLASS_RESULT: False
  SPLIT: test
TRAIN:
  CHECKPOINT_FREQ: 0
  COUNT_ITER: train_x
  PRINT_FREQ: 5
TRAINER:
  CDAC:
    CLASS_LR_MULTI: 10
    P_THRESH: 0.95
    RAMPUP_COEF: 30
    RAMPUP_ITRS: 1000
    STRONG_TRANSFORMS: ()
    TOPK_MATCH: 5
  COCOOP:
    CTX_INIT: 
    N_CTX: 16
    PREC: fp16
  COOP:
    CLASS_TOKEN_POSITION: end
    CSC: False
    CTX_INIT: 
    N_CTX: 16
    PREC: fp16
  CROSSGRAD:
    ALPHA_D: 0.5
    ALPHA_F: 0.5
    EPS_D: 1.0
    EPS_F: 1.0
  DAEL:
    CONF_THRE: 0.95
    STRONG_TRANSFORMS: ()
    WEIGHT_U: 0.5
  DAELDG:
    CONF_THRE: 0.95
    STRONG_TRANSFORMS: ()
    WEIGHT_U: 0.5
  DDAIG:
    ALPHA: 0.5
    CLAMP: False
    CLAMP_MAX: 1.0
    CLAMP_MIN: -1.0
    G_ARCH: 
    LMDA: 0.3
    WARMUP: 0
  DOMAINMIX:
    ALPHA: 1.0
    BETA: 1.0
    TYPE: crossdomain
  ENTMIN:
    LMDA: 0.001
  FIXMATCH:
    CONF_THRE: 0.95
    STRONG_TRANSFORMS: ()
    WEIGHT_U: 1.0
  IVLP:
    CTX_INIT: a photo of a
    N_CTX_TEXT: 2
    N_CTX_VISION: 2
    PREC: fp16
    PROMPT_DEPTH_TEXT: 9
    PROMPT_DEPTH_VISION: 9
  M3SDA:
    LMDA: 0.5
    N_STEP_F: 4
  MAPLE:
    CTX_INIT: a photo of a
    N_CTX: 2
    PREC: fp16
    PROMPT_DEPTH: 9
  MCD:
    N_STEP_F: 4
  MEANTEACHER:
    EMA_ALPHA: 0.999
    RAMPUP: 5
    WEIGHT_U: 1.0
  MIXMATCH:
    MIXUP_BETA: 0.75
    RAMPUP: 20000
    TEMP: 2.0
    WEIGHT_U: 100.0
  MME:
    LMDA: 0.1
  NAME: ZeroshotCLIP
  SE:
    CONF_THRE: 0.95
    EMA_ALPHA: 0.999
    RAMPUP: 300
  VPT:
    CTX_INIT: a photo of a
    N_CTX_VISION: 2
    PREC: fp16
    PROMPT_DEPTH_VISION: 1
USE_CUDA: True
VERBOSE: True
VERSION: 1
Collecting env info ...
** System info **
PyTorch version: 1.9.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27

Python version: 3.8 (64-bit runtime)
Python platform: Linux-5.4.0-128-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA TITAN Xp
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.9.0+cu111
[pip3] torchaudio==0.9.0
[pip3] torchvision==0.10.0+cu111
[conda] numpy                     1.24.3                   pypi_0    pypi
[conda] torch                     1.9.0+cu111              pypi_0    pypi
[conda] torchaudio                0.9.0                    pypi_0    pypi
[conda] torchvision               0.10.0+cu111             pypi_0    pypi
        Pillow (9.5.0)

Loading trainer: ZeroshotCLIP
Loading dataset: Caltech101
Reading split from /home/clayton/Project/clip/multimodal-prompt-learning/DATA/caltech-101/split_zhou_Caltech101.json
Building transform_train
+ random resized crop (size=(224, 224), scale=(0.08, 1.0))
+ random flip
+ to torch tensor of range [0, 1]
+ normalization (mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])
Building transform_test
+ resize the smaller edge to 224
+ 224x224 center crop
+ to torch tensor of range [0, 1]
+ normalization (mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])
---------  ----------
Dataset    Caltech101
# classes  100
# train_x  4,128
# val      1,649
# test     2,465
---------  ----------
Loading CLIP (backbone: ViT-B/16)
Prompts: ['a photo of a face.', 'a photo of a leopard.', 'a photo of a motorbike.', 'a photo of a accordion.', 'a photo of a airplane.', 'a photo of a anchor.', 'a photo of a ant.', 'a photo of a barrel.', 'a photo of a bass.', 'a photo of a beaver.', 'a photo of a binocular.', 'a photo of a bonsai.', 'a photo of a brain.', 'a photo of a brontosaurus.', 'a photo of a buddha.', 'a photo of a butterfly.', 'a photo of a camera.', 'a photo of a cannon.', 'a photo of a car side.', 'a photo of a ceiling fan.', 'a photo of a cellphone.', 'a photo of a chair.', 'a photo of a chandelier.', 'a photo of a cougar body.', 'a photo of a cougar face.', 'a photo of a crab.', 'a photo of a crayfish.', 'a photo of a crocodile.', 'a photo of a crocodile head.', 'a photo of a cup.', 'a photo of a dalmatian.', 'a photo of a dollar bill.', 'a photo of a dolphin.', 'a photo of a dragonfly.', 'a photo of a electric guitar.', 'a photo of a elephant.', 'a photo of a emu.', 'a photo of a euphonium.', 'a photo of a ewer.', 'a photo of a ferry.', 'a photo of a flamingo.', 'a photo of a flamingo head.', 'a photo of a garfield.', 'a photo of a gerenuk.', 'a photo of a gramophone.', 'a photo of a grand piano.', 'a photo of a hawksbill.', 'a photo of a headphone.', 'a photo of a hedgehog.', 'a photo of a helicopter.', 'a photo of a ibis.', 'a photo of a inline skate.', 'a photo of a joshua tree.', 'a photo of a kangaroo.', 'a photo of a ketch.', 'a photo of a lamp.', 'a photo of a laptop.', 'a photo of a llama.', 'a photo of a lobster.', 'a photo of a lotus.', 'a photo of a mandolin.', 'a photo of a mayfly.', 'a photo of a menorah.', 'a photo of a metronome.', 'a photo of a minaret.', 'a photo of a nautilus.', 'a photo of a octopus.', 'a photo of a okapi.', 'a photo of a pagoda.', 'a photo of a panda.', 'a photo of a pigeon.', 'a photo of a pizza.', 'a photo of a platypus.', 'a photo of a pyramid.', 'a photo of a revolver.', 'a photo of a rhino.', 'a photo of a rooster.', 'a photo of a saxophone.', 'a photo of a schooner.', 'a photo of a scissors.', 'a photo of a scorpion.', 'a photo of a sea horse.', 'a photo of a snoopy.', 'a photo of a soccer ball.', 'a photo of a stapler.', 'a photo of a starfish.', 'a photo of a stegosaurus.', 'a photo of a stop sign.', 'a photo of a strawberry.', 'a photo of a sunflower.', 'a photo of a tick.', 'a photo of a trilobite.', 'a photo of a umbrella.', 'a photo of a watch.', 'a photo of a water lilly.', 'a photo of a wheelchair.', 'a photo of a wild cat.', 'a photo of a windsor chair.', 'a photo of a wrench.', 'a photo of a yin yang.']
Loading evaluator: Classification
Note that load_model() is skipped as no pretrained model is given (ignore this if it's done on purpose)
Evaluate on the *test* set
=> result
* total: 2,465
* correct: 2,300
* accuracy: 93.3%
* error: 6.7%
* macro_f1: 90.5%
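
For context, this zero-shot baseline builds one text prompt per class (the "Prompts:" line above) and labels each test image by cosine similarity between its CLIP image embedding and the prompt embeddings. Below is a minimal sketch of that idea using the standalone openai clip package, not this repo's ZeroshotCLIP trainer; the image path and the class subset are just placeholders.

import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

# Illustrative subset of Caltech101 class names; the actual run uses all 100.
classnames = ["face", "leopard", "motorbike", "accordion", "airplane"]
prompts = [f"a photo of a {c}." for c in classnames]

with torch.no_grad():
    # Encode the prompts once and L2-normalize them.
    text_features = model.encode_text(clip.tokenize(prompts).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    # "example.jpg" is a placeholder for one test image.
    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    image_features = model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)

    # Scaled cosine similarity; the class with the highest score is the prediction.
    logits = 100.0 * image_features @ text_features.t()
    pred = logits.argmax(dim=-1).item()

print("predicted class:", classnames[pred])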
hooxizz commented 1 year ago

I figured it out. The scripts need to be modified.

Annusha commented 1 year ago

@hooxizz what did you modify in the scripts?

hooxizz commented 1 year ago

Hi @Annusha,

You need to specify a subset (base or new) to get the correct results.

For base class:

python train.py \
--root ${DATA} \
--trainer ${TRAINER} \
--dataset-config-file configs/datasets/${DATASET}.yaml \
--config-file configs/trainers/ZS/${CFG}.yaml \
--output-dir output/base2new/train_base/${DATASET}/zero-shot/${TRAINER}/${CFG}/seeds \
--eval-only \
DATASET.SUBSAMPLE_CLASSES base

For new class:

python train.py \
--root ${DATA} \
--trainer ${TRAINER} \
--dataset-config-file configs/datasets/${DATASET}.yaml \
--config-file configs/trainers/ZS/${CFG}.yaml \
--output-dir output/base2new/test_new/${DATASET}/zero-shot/${TRAINER}/${CFG}/seeds \
--eval-only \
DATASET.SUBSAMPLE_CLASSES new

I just copied my modified scripts; you can replace the configs with your own. Just remember to add --eval-only and set DATASET.SUBSAMPLE_CLASSES to new (or base).
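
For anyone wondering why the numbers change: the reported results follow the base/new protocol, so DATASET.SUBSAMPLE_CLASSES restricts evaluation to half of the classes instead of all 100, which is why the all-classes run above gives 93.3% rather than the figure in Table 3 (c). Below is a rough sketch of how CoOp-style code typically performs this split (an assumption about the implementation, not code copied from this repo): the sorted class list is cut in half, base keeps the first half and new keeps the second.

import math

def subsample_labels(labels, subsample="all"):
    # Assumed behaviour: 'base' keeps the first half of the sorted classes,
    # 'new' keeps the second half, 'all' keeps everything.
    labels = sorted(set(labels))
    if subsample == "all":
        return labels
    m = math.ceil(len(labels) / 2)
    return labels[:m] if subsample == "base" else labels[m:]

caltech_labels = list(range(100))                     # Caltech101 has 100 classes
print(len(subsample_labels(caltech_labels, "base")))  # 50
print(len(subsample_labels(caltech_labels, "new")))   # 50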

Hope this answer helps you.