TencentARC / ViT-Lens

[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
https://ailab-cvc.github.io/seed/vitlens/

InstructBLIP and SEED Implementation #14

Open MichaelMaiii opened 4 months ago

MichaelMaiii commented 4 months ago

Hi, I have checked the CLIP vision embedding (last hidden state) of BLIP-2 and InstructBLIP on Hugging Face (`instructblip-vicuna-7b`); its dimension is 257x1408. However, the multi-modal matching space of ViT-Lens uses a 1x768 dimension. I wonder how InstructBLIP and SEED can be used directly for text and image generation. Have they been fine-tuned?
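
For reference, this is how I checked the shape. A minimal sketch, assuming the Hugging Face `transformers` InstructBLIP API (the placeholder image and prompt are just for illustration):

```python
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

model_id = "Salesforce/instructblip-vicuna-7b"
processor = InstructBlipProcessor.from_pretrained(model_id)
model = InstructBlipForConditionalGeneration.from_pretrained(model_id)

image = Image.new("RGB", (224, 224))  # placeholder; any RGB image works
inputs = processor(images=image, text="Describe the image.", return_tensors="pt")

with torch.no_grad():
    # Run only the frozen ViT-g/14 vision tower of InstructBLIP.
    vision_out = model.vision_model(pixel_values=inputs.pixel_values)

# 257 tokens (256 patches + 1 CLS) x 1408 channels.
print(vision_out.last_hidden_state.shape)  # torch.Size([1, 257, 1408])
```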

StanLei52 commented 4 months ago

Hi, please check #11.

For integration, we used the same ViT as in InstructBLIP/SEED for ViT-Lens training. FYI, the checkpoint currently on Hugging Face is for 3D integration. I will upload checkpoints for more modalities and open-source this part in the near future.
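
To make the integration idea concrete: because the lens is trained against the same frozen ViT that InstructBLIP/SEED use, a non-image input encoded through the lens lands in the ViT's output feature space, shaped like image features. A conceptual sketch; `lens`, `vit`, and `encode_with_vitlens` are hypothetical stand-ins, since this part of the code is not yet released:

```python
import torch
import torch.nn as nn

def encode_with_vitlens(modality_input: torch.Tensor,
                        lens: nn.Module,
                        vit: nn.Module) -> torch.Tensor:
    """Map a non-image modality (e.g. a 3D point cloud) into the
    ViT feature space shared with InstructBLIP/SEED."""
    tokens = lens(modality_input)  # modality-specific lens -> ViT-style input tokens
    features = vit(tokens)         # the *same* frozen ViT used by InstructBLIP/SEED
    return features                # e.g. (B, 257, 1408), shaped like image features

# Features in this space can then be fed to the downstream model (e.g.
# InstructBLIP's Q-Former) in place of image features, which is why the
# reply above suggests the downstream models are used as-is.
```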

MichaelMaiii commented 4 months ago

Thanks a lot. It seems that only `vitlensL_processors` is available now.

By the way, I noticed that SEED-LLaMA outperforms InstructBLIP on image captioning, so it may be both simpler and better-performing to use SEED-LLaMA for both text and image generation.