KNITPhoenix opened 6 months ago
Thanks for the reply. I understand that, but I am getting the following error when passing only the image to the zero-shot model:
File "evaluate_sccpg.py", line 94, in
Since it is a zero-shot scenario, I am not passing any annotations for the image, and the annotation file being passed is an empty JSON file. Below is the complete shell file for evaluation that I am using:
torchrun --nproc_per_node=1 evaluate_sccpg.py \
    --model_name loca_zero_shot \
    --data_path /home/spandey8/Object_counting/sccpg_original_data/annotations/ \
    --model_path /home/spandey8/Object_counting/loca/pretrained_models \
    --backbone resnet50 \
    --swav_backbone \
    --reduction 8 \
    --image_size 512 \
    --num_enc_layers 3 \
    --num_ope_iterative_steps 3 \
    --emb_dim 256 \
    --num_heads 8 \
    --epochs 200 \
    --lr 1e-4 \
    --backbone_lr 0 \
    --lr_drop 300 \
    --weight_decay 1e-4 \
    --batch_size 16 \
    --dropout 0.1 \
    --num_workers 6 \
    --max_grad_norm 0.1 \
    --aux_weight 0.3 \
    --tiling_p 0.5 \
    --pre_norm \
    --zero_shot
Please guide me on how to proceed with a zero-shot evaluation.
Replacing the line that errors out, `out, _ = model(img)`, with `out, _ = model(img, None)` should do the trick.
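To illustrate why that one-line change fixes the error, here is a minimal, hypothetical sketch (not the actual LOCA code): a forward pass that takes the exemplar boxes as a required positional argument but ignores them in zero-shot mode, so `model(img)` raises a `TypeError` while `model(img, None)` works.

```python
import torch
from torch import nn

# Hypothetical stand-in model, only to show the calling convention.
# The real LOCA forward signature and internals differ.
class ZeroShotCounter(nn.Module):
    def __init__(self, zero_shot: bool = True):
        super().__init__()
        self.zero_shot = zero_shot
        self.head = nn.Conv2d(3, 1, kernel_size=1)  # stand-in density head

    def forward(self, img, bboxes):
        # In zero-shot mode the exemplar boxes are ignored entirely,
        # so the caller may pass None (or anything else) for bboxes.
        if not self.zero_shot:
            assert bboxes is not None, "few-shot mode needs exemplar boxes"
        return self.head(img), None  # (density map, auxiliary output)

model = ZeroShotCounter(zero_shot=True)
img = torch.rand(1, 3, 64, 64)
out, _ = model(img, None)  # works; model(img) raises a TypeError
```

Because `bboxes` is a positional parameter without a default, omitting it fails at call time even though the zero-shot branch never reads it.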
Thanks a lot.
Hi @KNITPhoenix, in the zero-shot setting, we replace the appearance and shape queries with learnable objectness queries and use them as the initial prototypes in the OPE module (see Section 3.1.1 in the paper). The model does accept bboxes as a parameter, but when `zero_shot` is set to True, it is ignored (so you can set it to None or anything else).
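The prototype substitution described above can be sketched roughly as follows. This is an illustrative snippet, not the repository's code: the names `num_objectness` and `emb_dim` and the broadcasting scheme are assumptions for the sake of the example.

```python
import torch
from torch import nn

emb_dim, num_objectness = 256, 3  # illustrative sizes

# Few-shot: prototypes would be derived from the exemplar boxes
# (appearance and shape queries). Zero-shot: they are replaced by
# learnable objectness embeddings, used as the initial prototypes
# that are then iteratively refined by the OPE module.
objectness_queries = nn.Parameter(torch.empty(num_objectness, emb_dim))
nn.init.normal_(objectness_queries)

batch_size = 2
# Broadcast the same learned queries to every image in the batch.
init_prototypes = objectness_queries.unsqueeze(0).expand(batch_size, -1, -1)
```

Since the queries are `nn.Parameter`s, they are learned end-to-end during training rather than computed from any annotation, which is why the empty annotation file is not a problem in this setting.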