[Closed] ChchunY closed this issue 7 months ago.
Hi, could you provide the script you used and print the torch tensor shapes of `verb_hs` and `self.verb2hoi_proj`? I need more information to solve the issue.
> Hi, could you provide the script you used and print the torch tensor shapes of `verb_hs` and `self.verb2hoi_proj`? I need more information to solve the issue.

I use the following script:

```bash
python main.py \
    --output_dir hico/hoiclip \
    --dataset_file vcoco \
    --hoi_path data/v-coco \
    --num_obj_classes 81 \
    --num_verb_classes 29 \
    --backbone resnet50 \
    --num_queries 64 \
    --dec_layers 3 \
    --epochs 90 \
    --lr_drop 60 \
    --use_nms_filter \
    --fix_clip \
    --batch_size 8 \
    --pretrained params/detr-r50-pre-2branch-vcoco.pth \
    --with_clip_label \
    --with_obj_clip_label \
    --gradient_accumulation_steps 1 \
    --num_workers 8 \
    --opt_sched "multiStep" \
    --dataset_root GEN \
    --model_name HOICLIP \
    --zero_shot_type default \
    --resume hico/hoiclip/checkpoint_last.pth \
    --verb_pth ./tmp/verb.pth \
    --verb_weight 0.1 \
    --training_free_enhancement_path \
    ./training_free_ehnahcement/
```
```
verb_hs shape: torch.Size([3, 8, 64, 117])
verb2hoi_proj shape: torch.Size([29, 263])
```
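For context, here is a minimal sketch of why these two shapes cannot be multiplied. The interpretation is an assumption based on the flags above: 117 matches the HICO-DET verb count, while 29 verbs and 263 HOI classes correspond to V-COCO.

```python
import torch

# Shapes printed above: the verb projection emits 117 logits (HICO-DET verbs),
# but verb2hoi_proj maps 29 verbs (V-COCO) to 263 HOI classes.
verb_logits = torch.randn(3, 8, 64, 117)  # stands in for self.verb_projection(verb_hs)
verb2hoi_proj = torch.randn(29, 263)      # stands in for self.verb2hoi_proj

try:
    _ = verb_logits @ verb2hoi_proj       # inner dims 117 vs 29 do not match
except RuntimeError as e:
    print(e)  # matmul fails because mat1 dim 1 != mat2 dim 0
```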
I met the same problem. How did you fix it?
Same problem here. The error I get is:
File "HOICLIP/models/models_hoiclip/hoiclip.py", line 210, in forward outputs_verb_class = logit_scale * self.verb_projection(verb_hs) @ self.verb2hoi_proj RuntimeError: mat1 dim 1 must match mat2 dim 0