-
When attempting to deploy the model to SageMaker manually via a deployment script, or automatically via the Hugging Face Inference Endpoints UI, I receive the same error:
"ValueEr…
-
To use EgoVLPv2 (specifically EgoTaskQA) on a custom VQA dataset,
it is essential to preprocess the metadata about questions and answers.
Specifically, the "answer_encode" field is needed.
How can I encod…
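In case a concrete starting point helps: a minimal sketch of one way to build such an encoding, assuming "answer_encode" is just an integer index into an answer vocabulary derived from the training annotations (the file and field names here are hypothetical):

```python
import json

# Hypothetical annotation format: a list of {"question": ..., "answer": ...}.
with open("train_annotations.json") as f:
    annotations = json.load(f)

# Build a deterministic answer vocabulary from the training split.
answers = sorted({item["answer"] for item in annotations})
answer_to_idx = {ans: i for i, ans in enumerate(answers)}

# Attach the integer code under the key the loader expects.
for item in annotations:
    item["answer_encode"] = answer_to_idx[item["answer"]]

with open("train_annotations_encoded.json", "w") as f:
    json.dump(annotations, f)
```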
-
In vqa_dataset.py, I don't understand why, when split != 'train', self.annotation is set to 'color_test.json'. Should we modify this filename manually when testing 'shape' or 'texture'?
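For illustration, a hedged sketch of the kind of change being asked about: passing the attribute name into the dataset instead of editing the hard-coded filename by hand (the constructor below is hypothetical, not the repo's actual signature):

```python
# Hypothetical refactor: select the test annotation file from the
# attribute ('color', 'shape', 'texture') instead of hard-coding it.
class VQADataset:
    def __init__(self, split, attribute="color"):
        if split == "train":
            self.annotation = "train.json"
        else:
            # e.g. 'shape_test.json' when testing the 'shape' attribute
            self.annotation = f"{attribute}_test.json"
```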
-
How can I obtain the VQA-RAD dataset? The dataset on the official site does not contain a trainset.json file. Could you please provide the VQA-RAD dataset?
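In case it is useful while waiting for an answer: a heavily hedged sketch of how splits are commonly derived from the single public VQA-RAD JSON in the med-VQA literature, assuming test items are marked by a phrase_type value beginning with 'test' (verify this assumption against your copy of the file):

```python
import json

# The official release is one JSON file; the filename may differ.
with open("VQA_RAD Dataset Public.json") as f:
    data = json.load(f)

# Assumption: test items carry phrase_type values like 'test_freeform'
# or 'test_para'; everything else belongs to the training split.
test_set = [d for d in data if d.get("phrase_type", "").startswith("test")]
train_set = [d for d in data if not d.get("phrase_type", "").startswith("test")]

with open("trainset.json", "w") as f:
    json.dump(train_set, f)
with open("testset.json", "w") as f:
    json.dump(test_set, f)
```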
-
Hi, thank you for open-sourcing this work. However, this code cannot be used with BLIP-2 as-is. Do you have code for a BLIP-2 version?
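Not this repository's code, but as a possible starting point, the transformers library exposes BLIP-2 directly; a minimal VQA-style inference sketch using a public checkpoint (the image path and prompt are placeholders, and a CUDA GPU is assumed):

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")  # placeholder image path
prompt = "Question: what color is the car? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    "cuda", torch.float16
)
out = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```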
-
When I fine-tune the model on VQAv2,
it produces a lower score (43%) than the one reported in the paper.
Can you explain where this difference comes from and how to fix it?
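As a general note (not an explanation of this specific gap): one frequent cause of such discrepancies is the metric itself, since VQAv2 scores answers against ten human annotations rather than by exact match. A simplified sketch of the per-question accuracy, ignoring the official answer normalization and annotator-subset averaging:

```python
# Simplified VQAv2 per-question accuracy: an answer counts as fully
# correct if at least 3 of the 10 annotators gave it.
def vqa_accuracy(pred, human_answers):
    matches = sum(1 for a in human_answers if a == pred)
    return min(matches / 3.0, 1.0)

# e.g. 2 of 10 annotators agree -> 0.667, not 0 as under exact match
print(vqa_accuracy("yes", ["yes", "yes", "no", "no", "maybe"] + ["no"] * 5))
```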
-
Hello, thanks for your excellent work!
I'm currently running VQA_RAD and PATH-VQA.
Although the metrics have finished calculating, the progress bar fails to update.
Could you please advise on how t…
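Assuming the bar is a tqdm instance (the repository's actual setup may differ), a common workaround is to drive the bar manually and force a redraw:

```python
import time
from tqdm import tqdm

items = range(10)  # stand-in for the evaluation loader
pbar = tqdm(total=len(items))
for _ in items:
    time.sleep(0.1)  # stand-in for the per-batch metric computation
    pbar.update(1)   # advance explicitly instead of wrapping the iterator
    pbar.refresh()   # force a redraw if the bar appears frozen
pbar.close()
```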
-
Appreciate your brilliant work!
I'm very curious about your processing pipeline for extracting frames from Cholec80 videos; could you share the scripts with us?
Thanks
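Not the authors' pipeline, but a minimal OpenCV sketch of the usual approach, assuming frames are kept at 1 fps from the nominally 25 fps Cholec80 videos (paths are placeholders):

```python
import os
import cv2

video_path = "video01.mp4"  # placeholder Cholec80 video
out_dir = "frames/video01"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # Cholec80 is nominally 25 fps
step = int(round(fps / 1.0))             # keep 1 frame per second

idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
```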
-
I was reading your code and noticed something strange in the VSMForCausalLM class, possibly a small bug, though I'm not sure since I haven't tested the code yet.
Why does images_clip (preprocessed by CLIPProc…
-
Hi,
I am trying to run the code from the KGen_VQA repository, but I encountered an issue due to the missing 'aokvqa_val_kb.pkl' and 'a-okvqa-repre.pkl' files. The code requires these files to execu…