-
Hello. While downloading the datasets required for training the model, I found the explanation of the EKVQA path to be insufficient, so I have a question.
├── coco
│ └── train2017
├── gqa
│ └── images
├── vg
│ ├── VG_100K
│ └── VG_100K_2
└── ekvqa
The layout is structured like this, and …
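As a sanity check before training, the layout above can be verified with a small stdlib-only script. This is a hypothetical helper, not part of the repository; the root path and the exact subdirectory names are assumptions taken from the tree shown in this issue.

```python
import os

# Expected subdirectories, taken from the tree in this issue
# (the ekvqa entry's contents are not specified, so only the
# directory itself is checked).
REQUIRED_DIRS = [
    "coco/train2017",
    "gqa/images",
    "vg/VG_100K",
    "vg/VG_100K_2",
    "ekvqa",
]

def check_dataset_layout(root):
    """Return the expected subdirectories that are missing under `root`."""
    return [d for d in REQUIRED_DIRS
            if not os.path.isdir(os.path.join(root, d))]
```

An empty return value means every directory in the tree above is present; otherwise the returned list names the paths that still need to be downloaded or placed.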
-
```
The configure line used in current ubuntu and debian packages of ffmpeg-damnvid
makes the binary unredistributable:
./configure --enable-memalign-hack --enable-libxvid --enable-libx264
--enable…
```
-
### Feature request
Would you please add an example about how to fine-tune BLIP2 on a VQA dataset?
### Motivation
VQA is a very important task.
### Your contribution
Sorry, I am not able to contribute an implementation myself.
-
To get your question resolved quickly, before opening an issue please check whether a similar one already exists via: [searching issue keywords] [filtering by labels] [the official documentation].
If you did not find a similar issue, please provide the following details when opening one so we can resolve it quickly:
- Title: a concise, accurate summary of your problem, e.g. "Insufficient Memory xxx"
- Version and environment information:
1) PaddlePaddle version…
-
Hi there. Thanks for sharing your great work.
I wonder whether the performance on the recently released QA benchmarks is zero-shot performance, or rather whether the original TimeChat weights have been…
-
Hi! Will you release the evaluation code for Cobra? (e.g., evaluation on benchmarks like text-vqa, pope, etc.) Thanks a lot.
-
Thanks to the author for open-sourcing a very good model. I noticed that the paper reports results on many benchmarks, but I did not see the corresponding code on GitHub. Could you open-source the relevant test cod…
-
Thanks for your excellent work!
I'm wondering where to get the "quiltvqa_test_w_ans.json", "quiltvqa_test_wo_ans.jsonl", "quiltvqa_red_test_wo_ans.jsonl", and "quiltvqa_red_test_w_ans.json" for evalu…