ylsung / VL_adapter

PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)
MIT License

unsuccessful clip feature extraction #10

Open JaniceLC opened 2 years ago

JaniceLC commented 2 years ago

Hi guys,

Thank you for providing the details on feature extraction.

I encountered a problem in feature_extraction/coco_CLIP.py. I used the following arguments to be consistent with your paper.

'att_size': 7,
'model_type': 'RN101'

The following error occurs when running line 152 of coco_CLIP.py:

tmp_att, tmp_fc = model.encode_image(image)

Exception has occurred: ValueError       (note: full exception trace is shown but execution is paused at: _run_module_as_main)
not enough values to unpack (expected 2, got 1)
  File "/workspace/VL_adapter/feature_extraction/Omni_Benchmark_feature_extraction-Copy1.py", line 173, in main
    tmp_att, tmp_fc = model.encode_image(image) # expected 1 got 2
  File "/workspace/VL_adapter/feature_extraction/Omni_Benchmark_feature_extraction-Copy1.py", line 252, in <module>
    main(params)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main (Current frame)
    "__main__", mod_spec)

The model is imported from VL_adapter/CLIP-ViL/clip/; a quick check like the one below confirms which clip copy is actually picked up.
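
A minimal check, assuming the bundled clip copies mirror the upstream OpenAI CLIP load() API (the model name, device, and jit arguments here are my own, not taken from coco_CLIP.py):

import torch
import clip

print(clip.__file__)  # shows which directory's clip copy is actually imported

model, _ = clip.load("RN101", device="cpu", jit=False)
with torch.no_grad():
    out = model.encode_image(torch.zeros(1, 3, 224, 224))
# The CLIP-ViL copy returns a single tensor here, which explains the unpack error above.
print(out.shape if torch.is_tensor(out) else [o.shape for o in out])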

Can you help me with that?

Thank you so much!!

JaniceLC commented 2 years ago

Hi guys,

I think I found the problem: the CLIP implementation differs between the two directories, CLIP-ViL/clip/ and VL-T5/src/clip/.

We should import CLIP from VL-T5/src/clip/ instead; a rough sketch of the change is below. Could you mention this in the repo's instructions?
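
For reference, an illustrative sketch only, assuming the script runs from the repo root and that the VL-T5 copy's encode_image returns the spatial/attention features plus the pooled feature (the sys.path handling and load() call are assumptions, not the exact coco_CLIP.py code):

import sys
sys.path.insert(0, "VL-T5/src")  # make `import clip` resolve to VL-T5/src/clip instead of CLIP-ViL/clip

import torch
import clip

model, _ = clip.load("RN101", device="cpu", jit=False)
dummy_image = torch.zeros(1, 3, 224, 224)  # stand-in for a preprocessed COCO image
with torch.no_grad():
    tmp_att, tmp_fc = model.encode_image(dummy_image)  # two outputs, as coco_CLIP.py expects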

Thank you very much!