-
It doesn't have to be per-image, but at least a global history would be nice, and it should also save settings.
-
**Is your feature request related to a problem?**
First, I tried making my first LoRA. I didn't have any experience before, and the results from the dataset and config here are amazing, but when I try to write ca…
-
**Details of model being requested**
- Model name: Florence-2
- Source repo link: https://huggingface.co/collections/microsoft/florence-6669f44df0d87d9c3bfb76de
- Research paper link: https://arxiv…
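
In the meantime, the checkpoints in that collection can already be run through Hugging Face transformers via the `trust_remote_code` path. A minimal captioning sketch following the public model card (the checkpoint name and `<CAPTION>` task prompt come from that card; the image path is a placeholder):

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Florence-2 ships custom modeling code, so trust_remote_code is required.
model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg")  # placeholder image path
task = "<CAPTION>"  # task prompt from the model card; others include <OD>, <DENSE_REGION_CAPTION>
inputs = processor(text=task, images=image, return_tensors="pt")

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# post_process_generation parses the raw output into a {task: result} dict.
print(processor.post_process_generation(raw, task=task, image_size=image.size))
```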
-
In your paper, the dense captioning model supports image retrieval using natural language queries and can localize those queries in the retrieved images. How can I perform the retrieval?
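
This is not the paper's pipeline, but as a generic baseline: text-to-image retrieval is often done by scoring a joint text-image model such as CLIP over a candidate set and ranking by similarity. A minimal sketch (checkpoint and file names are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["img0.jpg", "img1.jpg", "img2.jpg"]  # placeholder candidate images
images = [Image.open(p).convert("RGB") for p in paths]
query = "a dog catching a frisbee"

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text[0] holds the query's similarity to each candidate image.
scores = outputs.logits_per_text[0]
ranking = scores.argsort(descending=True)
print([paths[i] for i in ranking])
```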
-
I am doing violence detection using video captioning. If I give your model a number of videos containing some type of violence, will it be able to describe that in the captions? For example, if a tree is on fire i…
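
The example above is cut off, but in general a caption model only mentions an event if it happens to describe the visible content; it does not emit explicit "violence" labels. One common workaround is to caption sampled frames and scan the captions for target terms. A rough sketch using an off-the-shelf BLIP image captioner rather than this repo's video model (the keyword list and frame path are assumptions):

```python
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Hypothetical keyword list; tune it to the kinds of events you care about.
TARGET_TERMS = {"fire", "fight", "fighting", "gun", "blood"}

def caption_and_flag(frame_path):
    image = Image.open(frame_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(**inputs, max_new_tokens=40)
    caption = processor.decode(ids[0], skip_special_tokens=True)
    # Naive word-level match; a real system would use something more robust.
    hits = TARGET_TERMS.intersection(caption.lower().split())
    return caption, hits

caption, hits = caption_and_flag("frame_0001.jpg")  # placeholder extracted frame
print(caption, "->", sorted(hits) or "no target terms")
```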
-
Hi,
Thanks for sharing your interesting work on image captioning. I want to run the pretrained model on a few of my own images as a test. I wanted to confirm whether it's [this](https://github.com/LuoweiZhou…
-
I found the following error when running main.py:
D:\Anaconda3\envs\KerasIntepreter\python.exe C:/Users/Dell/Downloads/Compressed/image_captioning-master/main.py
2019-05-21 16:33:16.587042: I …
-
I tried running both Docker images as described and got the same error each time. From the conceptual-captions directory, I ran `python /conceptual-captions/generate_caption.py test_images.txt test_im…
-
Hello, I was using BLIP captioning and found that the captions get cut off rather than being complete.
Is there a way to extend the length of the tokens/captions, like we can do in Kohy…
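
If the captioning runs through Hugging Face transformers under the hood, the cutoff usually comes from the default generation length, which `generate()` arguments control. A minimal sketch (the checkpoint name follows the public BLIP captioning card; the image path is a placeholder):

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

# Raising max_new_tokens lifts the length cap that truncates captions;
# beam search is optional but tends to produce more complete sentences.
output_ids = model.generate(**inputs, max_new_tokens=80, num_beams=3)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```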
-
Hi, thanks for your amazing work. I'm enjoying BLIP, which demonstrates impressive results :)
Now I have a question: how can I fine-tune BLIP for the image captioning task on a custom dataset?
My dat…
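
The dataset description above is cut off, but as a generic starting point, here is a rough fine-tuning sketch with Hugging Face transformers, following the common pattern of passing the caption token ids as `labels` so the model computes the captioning loss. The dataset layout, hyperparameters, and file names are assumptions, not this repo's recipe:

```python
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

class CaptionDataset(Dataset):
    """(image path, caption) pairs; the layout is a hypothetical example."""
    def __init__(self, pairs):
        self.pairs = pairs
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, idx):
        path, caption = self.pairs[idx]
        enc = processor(images=Image.open(path).convert("RGB"), text=caption,
                        padding="max_length", max_length=40, truncation=True,
                        return_tensors="pt")
        return {k: v.squeeze(0) for k, v in enc.items()}

pairs = [("img0.jpg", "a cat sleeping on a sofa")]  # replace with your data
loader = DataLoader(CaptionDataset(pairs), batch_size=4, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):  # hypothetical epoch count
    for batch in loader:
        # BLIP returns the language-modeling loss when the caption
        # token ids are passed as labels.
        loss = model(pixel_values=batch["pixel_values"],
                     input_ids=batch["input_ids"],
                     attention_mask=batch["attention_mask"],
                     labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```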