Thank you for providing the details on feature extraction.
I ran into a problem in `feature_extraction/coco_CLIP.py`. I used the following arguments to be consistent with your paper:
```python
'att_size': 7,
'model_type': 'RN101'
```
The following error occurs when running line 152 in `coco_CLIP.py`:

```python
tmp_att, tmp_fc = model.encode_image(image)
```
```
Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
not enough values to unpack (expected 2, got 1)
  File "/workspace/VL_adapter/feature_extraction/Omni_Benchmark_feature_extraction-Copy1.py", line 173, in main
    tmp_att, tmp_fc = model.encode_image(image)  # expected 1 got 2
  File "/workspace/VL_adapter/feature_extraction/Omni_Benchmark_feature_extraction-Copy1.py", line 252, in <module>
    main(params)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main (Current frame)
    "__main__", mod_spec)
```
The model is imported from `VL_adapter/CLIP-ViL/clip/`.
Can you help me with this?
Thank you so much!