-
### Feature request
We would like to implement https://github.com/luogen1996/LaVIN#demo and/or https://huggingface.co/docs/transformers/main/model_doc/blip-2 as a ReplayStrategyMixin.
In particul…
-
Hello,
I am struggling to trace the BLIP2 model from the transformers library using `torch_neuronx` to make it work on an inf2 instance. The model I want to trace is the [XXL version](https://huggingface.co/Salesfo…
-
Hi,
Can you add the VQA fine-tuning function for BLIP2?
In the paper, fine-tuning on the VQA task also fine-tunes the image encoder, so I set `freeze_vit: False`.
But I encoun…
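For reference, LAVIS-style training configs usually control whether the image encoder is trained via a `freeze_vit` flag in the model section. A minimal sketch (exact keys and the `arch` value are assumptions and may differ between LAVIS versions):

```yaml
model:
  arch: blip2_opt          # hypothetical arch name; check your LAVIS version
  freeze_vit: False        # unfreeze the ViT image encoder for VQA fine-tuning
```

Unfreezing the ViT substantially increases memory use during training, which may explain errors that only appear with `freeze_vit: False`.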
zhl98 updated 2 months ago
-
This issue on the LAVIS repository, https://github.com/salesforce/LAVIS/issues/118, shows someone who managed to fit BLIP2 onto a 24 GB VRAM card. Would it be possible to apply that approach to this repo?
-
This is good work and I am very interested in studying it. Can you provide the code? I could not find it. Thanks.
-
Hi all, this issue will track the feature requests you've made to TensorRT-LLM & provide a place to see what TRT-LLM is currently working on.
Last update: `Jan 14th, 2024`
🚀 = in development
#…
-
Hi guys, I am trying to visualize the attention maps of the pre-trained model Blip2-opt-6.7b.
I set the flags related to attention output to **True** and successfully got cross_attentions from the o…
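Once the cross-attention weights are extracted, turning them into a heatmap is mostly a reshape-and-average exercise. A minimal sketch with random data standing in for one layer's `cross_attentions` tensor; the shapes (12 heads, 32 Q-Former query tokens, a 16x16 grid of image patches) are illustrative assumptions, not values from the actual BLIP2 config:

```python
import numpy as np

# Stand-in for one layer's cross-attention weights between Q-Former query
# tokens and ViT image patches: (num_heads, num_queries, num_patches).
num_heads, num_queries, num_patches = 12, 32, 256
cross_attn = np.random.rand(num_heads, num_queries, num_patches)
cross_attn /= cross_attn.sum(axis=-1, keepdims=True)  # normalize like softmax

def attention_heatmap(attn, grid_size=16):
    """Average over heads and query tokens, then reshape the patch
    dimension into a 2D grid for overlaying on the input image."""
    per_patch = attn.mean(axis=(0, 1))  # (num_patches,)
    return per_patch.reshape(grid_size, grid_size)

heatmap = attention_heatmap(cross_attn)
print(heatmap.shape)  # (16, 16)
```

In practice the heatmap is then upsampled to the input-image resolution (e.g. with bilinear interpolation) before overlaying it on the original image.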
-
Hi, thank you for open-sourcing this. However, this code cannot be used with BLIP2. Do you have a version of the code for BLIP2?
-
Starting from the tutorial [link](https://github.com/salesforce/ALBEF/blob/main/visualization.ipynb) and considering the function **compute_gradcam** in BlipITM [link](https://github.com/salesforce/LA…
-
Hello author, is this a project for fine-tuning BLIP2 on an image captioning dataset?
I have been searching everywhere for BLIP2 image-captioning fine-tuning projects, and I hope you can bring me good …