-
I evaluated the VQA and scene classification tasks on the model fine-tuned with GeoChatInstruct, and the results are close to the metrics reported in the paper; however, the region captioning result is …
-
Hi, how can I use the pretrained 'Conceptual captions' model (14pXWwB4Zm82rsDdvbGguLfx9F8aM7ovT) locally?
Thanks!
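The quoted string looks like a Google Drive file ID (a common way such checkpoints are shared), so one plausible approach is to build the direct-download URL, fetch the file, and load it locally. The URL pattern below is the standard Drive one; the downloader tool, filenames, and loading calls in the comments are illustrative assumptions, not this repo's documented API.

```python
def drive_download_url(file_id: str) -> str:
    """Direct-download URL for a publicly shared Google Drive file.

    Assumption: the ID quoted in the question is a Drive file ID; this is
    the standard `uc?id=` pattern Drive uses for direct downloads.
    """
    return f"https://drive.google.com/uc?id={file_id}"

# Hypothetical usage (tool and filenames are illustrative, not the repo's API):
#   gdown "https://drive.google.com/uc?id=<FILE_ID>"      # or any downloader
#   state = torch.load("conceptual_weights.pt", map_location="cpu")
#   model.load_state_dict(state)
```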
-
Is this model for remote-sensing image captioning?
I ask because COCO images are used, and COCO images are not RSIC images.
Moreover, in the docs folder, all examples are of natural images.
request y…
-
When connected online the model runs just fine; however, as soon as I disconnect I get "failed to resolve hugging.co". I can run other models offline, but not this newer one. Is there any way around th…
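Assuming the model was already cached by an earlier online run, one workaround is to put the Hugging Face libraries into offline mode before loading anything; `HF_HUB_OFFLINE` and `TRANSFORMERS_OFFLINE` are the documented environment switches for this. A minimal sketch (the model id in the comment is illustrative):

```python
import os

# Must be set before huggingface_hub / transformers are imported so the
# libraries read them at startup; both are documented offline switches.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: skip network calls
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: use local cache only

# With the flags set, from_pretrained() resolves the locally cached snapshot
# instead of trying to reach the Hub, e.g.:
# from transformers import pipeline
# captioner = pipeline("image-to-text", model="<your-cached-model-id>")
```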
-
I've downloaded, redownloaded, and deleted the file, but no matter what, I don't see the captioning option in the Tools menu. I'm on Windows, running OBS version 30.1.2.
-
Hitting this issue when decoding: any thoughts?
```
File "/home/ubuntu/wbc/captioning/InternLM-XComposer/projects/ShareGPT4V/run_captioning.py", line 102, in gen_json
captions = eval_mode…
```
-
Thanks for your interesting work and for sharing the code.
In the README, you only provide examples of how to generate captions for one image at a time (batch size = 1). Could you (@Yushi-Hu) expl…
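Since the README snippet is single-image, batching usually amounts to chunking the image list and passing each chunk through the processor in one call. The chunking helper below is plain Python; the commented captioning loop is a transformers-style assumption, not necessarily this repo's exact API.

```python
def batched(items, batch_size):
    """Yield successive fixed-size chunks of a list (last chunk may be short)."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical batched captioning loop (processor/model names are illustrative):
# for chunk in batched(image_paths, 8):
#     images = [Image.open(p).convert("RGB") for p in chunk]
#     inputs = processor(images=images, return_tensors="pt", padding=True)
#     ids = model.generate(**inputs, max_new_tokens=30)
#     captions += processor.batch_decode(ids, skip_special_tokens=True)
```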
-
### Question
Hi, thanks for the great work! I have been trying to evaluate LLaVA image captioning on Flickr30k, but I am not able to reproduce the results. While the original LLaVA paper does not hav…
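For reproducing caption metrics on Flickr30k, the usual route is the COCO caption evaluation toolkit (pycocoevalcap); discrepancies often come from tokenization or from using a different reference split. A minimal CIDEr sketch, assuming pycocoevalcap is installed and the captions are already tokenized:

```python
def cider_score(references, hypotheses):
    """Corpus-level CIDEr via pycocoevalcap.

    references: {image_id: [reference caption strings]}
    hypotheses: {image_id: [single generated caption string]}
    Assumes captions are already lowercased/tokenized; the official pipeline
    normally applies pycocoevalcap's PTBTokenizer first.
    """
    from pycocoevalcap.cider.cider import Cider  # deferred: optional dependency
    score, per_image = Cider().compute_score(references, hypotheses)
    return score, per_image
```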
-
Hi,
Firstly, thank you for maintaining such an awesome repository!
I'm particularly interested in using BLIP-2 for image captioning. Could you please provide some guidance on whether it's feasib…
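BLIP-2 captioning is feasible directly through the transformers library: `Blip2Processor` and `Blip2ForConditionalGeneration` are its documented BLIP-2 classes, and `Salesforce/blip2-opt-2.7b` is one of the published checkpoints. The generation settings below are assumptions to tune, not recommended defaults.

```python
def caption_image(image_path: str,
                  checkpoint: str = "Salesforce/blip2-opt-2.7b") -> str:
    """Generate a caption with BLIP-2 via the transformers library.

    The classes and checkpoint are transformers' documented BLIP-2 API;
    fp16 + device_map="auto" and max_new_tokens=30 are assumptions.
    """
    import torch
    from PIL import Image
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    processor = Blip2Processor.from_pretrained(checkpoint)
    model = Blip2ForConditionalGeneration.from_pretrained(
        checkpoint, torch_dtype=torch.float16, device_map="auto")

    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(
        model.device, torch.float16)
    ids = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(ids, skip_special_tokens=True)[0].strip()
```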
-
Hello, thank you very much for your excellent work. I would like to use your model for some image captioning tasks. Could you please provide some usage instructions? Thank you!