
Awesome Vision Language Model

Overview

Contrastive Learning

Narrows the distance between image and text embeddings in a shared latent space: matched image-text pairs are pulled together while mismatched pairs are pushed apart, so the two modalities can be compared directly (see the sketch below).
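
A minimal PyTorch sketch of this idea, assuming hypothetical 512-dim encoder outputs and a symmetric CLIP-style InfoNCE objective; it is an illustration, not code from any listed paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_features, text_features, temperature=0.07):
    # L2-normalize so dot products become cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # Pairwise similarity matrix: entry (i, j) compares image i with text j.
    logits = image_features @ text_features.t() / temperature
    # The matching pair for each image/text sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# Random features standing in for image/text encoder outputs.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(contrastive_loss(img, txt))
```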

PrefixLM

A unified multi-modal encoder-decoder architecture in which image features act as a prefix that conditions autoregressive text generation. Main tasks are image-conditioned text generation/captioning and VQA (see the sketch below).
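
A toy PrefixLM sketch in PyTorch, with hypothetical model sizes: projected image patch features are prepended to the text embeddings, the image prefix stays fully visible, and the text positions use a causal mask.

```python
import torch
import torch.nn as nn

class TinyPrefixLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=256, patch_dim=768):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, dim)      # map image patch features to model dim
        self.token_emb = nn.Embedding(vocab_size, dim)   # text token embeddings
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, patch_feats, text_ids):
        prefix = self.patch_proj(patch_feats)            # (B, P, dim) image prefix
        tokens = self.token_emb(text_ids)                # (B, T, dim) text tokens
        x = torch.cat([prefix, tokens], dim=1)           # (B, P+T, dim)
        P, T = prefix.size(1), tokens.size(1)
        # Causal mask over text positions; every position may attend to the image prefix.
        mask = torch.triu(torch.ones(P + T, P + T, dtype=torch.bool), diagonal=1)
        mask[:, :P] = False
        h = self.blocks(x, mask=mask)
        return self.lm_head(h[:, P:])                    # next-token logits for the text part only

model = TinyPrefixLM()
logits = model(torch.randn(2, 16, 768), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # (2, 12, 1000)
```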

Multi-modal Fusion with Cross Attention

Fuses visual information into a language model decoder through a cross-attention mechanism, where the text hidden states attend to image features. Main tasks are image captioning and VQA (see the sketch below).
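
A minimal cross-attention fusion block in PyTorch, with hypothetical dimensions: text hidden states supply the queries, image features supply the keys and values, and a residual connection keeps the original language-model stream intact.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_hidden, image_feats):
        # Queries come from the text stream; keys/values come from the vision stream.
        fused, _ = self.cross_attn(query=text_hidden, key=image_feats, value=image_feats)
        return self.norm(text_hidden + fused)   # residual + norm over the fused text stream

fusion = CrossAttentionFusion()
text = torch.randn(2, 12, 256)    # decoder hidden states for 12 text tokens
image = torch.randn(2, 49, 256)   # 49 visual tokens (e.g., a 7x7 feature map)
print(fusion(text, image).shape)  # (2, 12, 256)
```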

Masked Language Modeling / Image-Text Matching

A combination of MLM and ITM objectives. MLM predicts masked caption words conditioned on the image, which is often annotated with extra region information such as bounding boxes, while ITM decides whether an image and a caption match, discriminating the true caption from many negative captions (see the sketch below).
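
A hedged PyTorch sketch of the two objectives, with hypothetical names and dimensions: an MLM head predicts masked caption tokens from fused image-text features, and an ITM head classifies whether the image and caption actually match.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 256
mlm_head = nn.Linear(dim, vocab_size)   # predicts the identity of masked words
itm_head = nn.Linear(dim, 2)            # binary match / no-match classifier

# `fused` stands in for the output of a multi-modal encoder over
# (image regions + caption tokens); random here for illustration.
fused = torch.randn(4, 20, dim)          # batch of 4, sequence length 20

# MLM: loss is computed only at masked caption positions.
masked_positions = torch.tensor([3, 7])                  # example masked indices
mlm_logits = mlm_head(fused[:, masked_positions])        # (4, 2, vocab_size)
mlm_targets = torch.randint(0, vocab_size, (4, 2))       # true word ids at those positions
mlm_loss = F.cross_entropy(mlm_logits.flatten(0, 1), mlm_targets.flatten())

# ITM: classify from the [CLS]-like first token; half of this toy batch
# is paired with negative (mismatched) captions, labeled 0.
itm_logits = itm_head(fused[:, 0])                        # (4, 2)
itm_labels = torch.tensor([1, 1, 0, 0])
itm_loss = F.cross_entropy(itm_logits, itm_labels)

print(mlm_loss + itm_loss)   # joint objective
```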

No Training

No additional training is performed: frozen pretrained models are used as-is, and image and text features are mapped into a single shared space so the two modalities can be compared (one possible reading is sketched below).
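
A minimal sketch of one such training-free strategy, loosely in the spirit of ASIF-style relative representations; the encoders, feature sizes, and anchor set are hypothetical stand-ins. Each image and text is described by its similarities to a shared set of anchor image-text pairs, which places both modalities in one comparable space without updating any parameters.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Features from two frozen, independently pretrained encoders (random stand-ins).
anchor_img = F.normalize(torch.randn(100, 512), dim=-1)  # image features of 100 anchor pairs
anchor_txt = F.normalize(torch.randn(100, 384), dim=-1)  # text features of the same pairs

def to_shared_space(feats, anchors):
    # Describe each feature by its similarity to every anchor of its own modality.
    return F.normalize(F.normalize(feats, dim=-1) @ anchors.t(), dim=-1)

query_img = torch.randn(1, 512)       # a new image feature
candidate_txt = torch.randn(5, 384)   # features of 5 candidate captions

# Both modalities now live in the same 100-dim "anchor similarity" space;
# no parameters are trained or updated anywhere.
scores = to_shared_space(query_img, anchor_img) @ to_shared_space(candidate_txt, anchor_txt).t()
print(scores.argmax().item())  # index of the best-matching caption
```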