jnhwkim / Pensees

A collection of fragments for reading research papers.

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models #16


jnhwkim commented 1 year ago

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

Li et al., arXiv 2023

The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
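
The bridging pattern described in the abstract can be made concrete with a small sketch. The snippet below is not the authors' implementation (the official code lives in the LAVIS repository); it is a minimal PyTorch sketch under assumed toy dimensions, with `nn.MultiheadAttention` standing in for the Q-Former's self- and cross-attention and plain `nn.Linear` modules standing in for the frozen image encoder and frozen LLM. All module names, depths, and sizes are illustrative.

```python
# Minimal sketch of BLIP-2's bridging pattern (illustrative, not the LAVIS code).
import torch
import torch.nn as nn

class QFormerBlock(nn.Module):
    """Learnable queries self-attend, then cross-attend to frozen image features."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, queries, image_feats):
        queries = queries + self.self_attn(queries, queries, queries, need_weights=False)[0]
        queries = queries + self.cross_attn(queries, image_feats, image_feats, need_weights=False)[0]
        return queries + self.ffn(queries)

class QFormer(nn.Module):
    """Learnable query tokens that distill frozen image features and project them into the LLM's input space."""
    def __init__(self, num_queries=32, dim=768, llm_dim=2048, depth=2):
        super().__init__()
        self.queries = nn.Parameter(0.02 * torch.randn(1, num_queries, dim))
        self.blocks = nn.ModuleList(QFormerBlock(dim) for _ in range(depth))
        self.to_llm = nn.Linear(dim, llm_dim)  # projection into the frozen LLM's embedding space

    def forward(self, image_feats):
        q = self.queries.expand(image_feats.size(0), -1, -1)
        for block in self.blocks:
            q = block(q, image_feats)
        return self.to_llm(q)  # (B, num_queries, llm_dim): soft visual prompts for the LLM

# Stand-ins for the frozen backbones; only the Q-Former is trainable.
frozen_image_encoder = nn.Linear(3 * 224 * 224, 768)   # placeholder for a frozen ViT
frozen_llm_head = nn.Linear(2048, 32000)                # placeholder for a frozen LLM
for p in list(frozen_image_encoder.parameters()) + list(frozen_llm_head.parameters()):
    p.requires_grad = False

qformer = QFormer()
pixels = torch.randn(2, 3 * 224 * 224)                   # a batch of 2 flattened images
image_feats = frozen_image_encoder(pixels).unsqueeze(1)  # (2, 1, 768) frozen visual features
visual_prompts = qformer(image_feats)                    # (2, 32, 2048) trainable output
logits = frozen_llm_head(visual_prompts)                 # would be prepended to text tokens in practice
print(logits.shape)                                      # torch.Size([2, 32, 32000])
```

Only the Q-Former and its projection into the LLM's embedding space receive gradients; both backbones stay frozen, which is where the large reduction in trainable parameters claimed in the abstract comes from.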


🔑 Key idea: A lightweight Querying Transformer (Q-Former) bridges a frozen image encoder to a frozen LLM through two pre-training stages: vision-language representation learning with the frozen image encoder, then vision-to-language generative learning with the frozen LLM.

💪 Strength:

😡 Weakness:

🤔 Confidence:

✏️ Memo: