-
In the paper, you mention in Table 2 that you use this for image captioning. However, I do not see the image-captioning code in this GitHub repository. Can you tell me how to use it?
Thanks,
-
After training the model, can we use only the target-encoder for downstream tasks, like image captioning?
-
### Describe your use-case.
There are multiple simple models used in this repository: BLIP, CLIP, and WD taggers. However, when it comes to detailed descriptions, they are all dwarfed by modern multi…
-
Image captioning is another popular task for which we should have an example in the Model Zoo.
-
It doesn't have to be per image, but at least a global history would be nice. It should also save settings.
-
Hi,
I tried to view the test results on the COCO Caption 2014 val dataset and visualize them. I downloaded the coco_caption dataset and the pretrained models, like base_xe etc. Some of them create a sentence like "…
tprdk updated 2 years ago
-
- https://arxiv.org/abs/2107.14178
- 2021
Image captioning has been shown to achieve better performance by using scene graphs to represent the relationships between objects in an image. Current caption encoders generally use a graph convolutional net (GCN) to represent the relational information and combine it with the object-region features by concatenation or convolution…
e4exp updated 2 years ago
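The relational encoding the abstract describes can be illustrated with a minimal NumPy sketch: one GCN layer propagates information over a toy scene graph of object regions, and the result is fused with the raw region features by concatenation. All names, dimensions, and the toy graph here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer (Kipf & Welling style):
    H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm @ feats @ weight, 0.0)

# Toy scene graph: 3 object regions, edges (0-1) and (1-2).
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
rng = np.random.default_rng(0)
region_feats = rng.normal(size=(3, 4))   # hypothetical region features
w = rng.normal(size=(4, 8))              # hypothetical layer weights
relational = gcn_layer(adj, region_feats, w)

# Fuse relational and raw region features by concatenation, as the
# abstract describes, before feeding them to a caption decoder.
fused = np.concatenate([region_feats, relational], axis=1)
print(fused.shape)  # (3, 12)
```

The concatenation step is what the abstract contrasts with convolution-based fusion; either way, the decoder receives per-region features augmented with neighborhood context.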
-
Are you planning on using the biggest GPT-2 for the image captioning model you are working on?
-
**Describe the bug**
Some songs on YouTube Music either include "(Closed Captioning" or are named "*NAME* Closed Captioning". Web Scrobbler fails to detect this and includes it in the title, instead …
-
Hi, great work! I think using image captions and text in images is a good approach for this classification task, and I used your captions data for one of my school projects. I would like to know what…