Narrow the distance between image and text representations in a shared latent space (contrastive alignment).
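A minimal NumPy sketch of how such contrastive alignment is typically trained (a symmetric InfoNCE loss over a batch of paired embeddings, as in CLIP-style objectives; the function name and temperature value here are illustrative assumptions, not from the source):

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; all other
    entries in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (B, B) similarity matrix
    labels = np.arange(len(logits))              # matching pairs on the diagonal

    def xent(l):
        # Cross-entropy toward the diagonal, with max-subtraction for stability.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions.
    return (xent(logits) + xent(logits.T)) / 2
```

Minimizing this loss pulls each image embedding toward its paired text embedding and pushes it away from the other captions in the batch.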
Unified multi-modal architecture consisting of an encoder and a decoder. Main tasks are image-conditioned text generation (captioning) and VQA.
Fuse visual information into a language-model decoder using a cross-attention mechanism. Main tasks are image captioning and VQA.
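The fusion step above can be sketched as single-head cross-attention, where the decoder's text states act as queries over the visual features (a toy NumPy version; the projection matrices `Wq`, `Wk`, `Wv` and the function name are assumptions for illustration):

```python
import numpy as np

def cross_attention(text_hidden, image_feats, Wq, Wk, Wv):
    """Single-head cross-attention: text tokens query the image features."""
    Q = text_hidden @ Wq                # (T, d) queries from the text decoder
    K = image_feats @ Wk                # (N, d) keys from the visual encoder
    V = image_feats @ Wv                # (N, d) values from the visual encoder
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # scaled dot-product scores
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)  # softmax over image regions
    return attn @ V                     # (T, d) visually-informed text states
```

Each text token ends up as a weighted mixture of image-feature values, which is what lets the decoder condition its next-word predictions on the image.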
Combination of MLM and ITM. MLM predicts the masked word using the image, which is annotated with extra information such as bounding boxes; ITM matches an image with its correct caption among many negative captions.
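The ITM side of this objective can be reduced to a toy ranking problem: score the image against each candidate caption and pick the best match, with the remaining candidates acting as negatives. Real models score each pair with a classifier head over a fused representation; the dot-product scorer and function name below are simplifying assumptions:

```python
import numpy as np

def itm_rank(image_emb, caption_embs):
    """Image-text matching as ranking: return the index of the caption whose
    embedding scores highest against the image; the rest act as negatives."""
    sims = caption_embs @ image_emb     # one similarity score per candidate
    return int(np.argmax(sims))
```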
Without any training, using only pretrained models, map the two modalities' features into one shared space.
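One common use of such frozen, shared-space features is zero-shot classification: embed the image and a text prompt for each class with the pretrained encoders, then pick the class by cosine similarity, with no training step at all. A minimal sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs):
    """Zero-shot classification with frozen pretrained embeddings: cosine
    similarity between the image and each class-prompt text embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))    # index of the most similar class prompt
```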