sail-sg / ptp

[CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
https://arxiv.org/abs/2212.09737
Apache License 2.0

PTP

This repository includes the implementation of the following method:

Introduction

The goal of Position-guided Text Prompt (PTP) is to bring position information into conventional Vision-Language Pre-training (VLP) models, as current mainstream end-to-end VLP models ignore this important cue.

We observe that position information is missing even in well-trained ViLT models.

Our method provides a good alternative to existing object-feature-based methods (BUTD and follow-up works).

Some examples of PTP prompts are shown below:
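As a concrete illustration of the prompt format, here is a minimal sketch of the PTP idea: divide the image into an N x N grid of blocks, map each detected object to the block containing its center, and fill a position-aware template. The grid size, template wording, and function names below are our illustrative assumptions, not the repository's exact implementation:

```python
# Minimal sketch of the PTP idea. The 3x3 grid, the template string, and
# these helper names are illustrative assumptions; see the paper for the
# precise prompt design.

def box_to_block(box, img_w, img_h, n=3):
    """Map a bounding box (x1, y1, x2, y2) to the index of the
    n x n image block containing its center."""
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    col = min(int(cx / img_w * n), n - 1)
    row = min(int(cy / img_h * n), n - 1)
    return row * n + col

def ptp_prompt(obj_label, box, img_w, img_h, n=3):
    """Fill the position-guided text prompt for one detected object."""
    block = box_to_block(box, img_w, img_h, n)
    return f"The block {block} has a {obj_label}."

# Example: a dog whose box center falls in the bottom-right block of a 3x3 grid.
print(ptp_prompt("dog", (300, 280, 470, 450), img_w=480, img_h=480))
# -> "The block 8 has a dog."
```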

Updates

Installation

Please find installation instructions for PyTorch in INSTALL.md.

Dataset Preparation

You may follow the instructions in DATASET.md to prepare the datasets. Since dataset preparation is very time-consuming, we provide detailed guidance and release our pre-processed corpus.
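For orientation, a PTP-augmented corpus entry simply pairs an image with a caption that has position prompts appended. The JSON-lines layout and field names below are illustrative assumptions; check DATASET.md for the actual format of the released corpus:

```python
# Illustrative sketch of writing one PTP-augmented corpus entry as JSON lines.
# The field names and file layout are assumptions for illustration, not the
# documented schema of the released corpus.
import json

entry = {
    "image": "coco/train2014/COCO_train2014_000000000009.jpg",
    # original caption with one position-guided prompt appended
    "caption": "A plate of food on a table. The block 4 has a broccoli.",
}

with open("ptp_corpus.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry) + "\n")
```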

Pretrained & Finetune Models

1. Pre-trained Model

| Method | Vision Encoder | #Images | Dataset | Pretrained Weights | Training Logs |
|--------|----------------|---------|---------|--------------------|---------------|
| PTP-BLIP | ViT-B (DeiT) | 4M | CC3M+COCO+VG+SBU | link | link |
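Once downloaded, the checkpoint can be inspected and loaded with standard PyTorch utilities. A minimal sketch follows; the file name is an assumption, and the defensive handling of a nested "model" key reflects common BLIP-style checkpoints rather than a documented guarantee:

```python
# Minimal sketch of inspecting the released pre-trained weights.
# The checkpoint filename is an assumption; BLIP-style checkpoints often
# nest the weights under a "model" key, which we handle defensively.
import torch

ckpt = torch.load("ptp_blip_4m.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Inspect a few parameter names/shapes before wiring them into a model.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))

# model.load_state_dict(state_dict, strict=False)  # once the model is built
```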

2. Zero-shot & Fine-tuning Downstream Model

2.1 Captioning

| Method | B@4 | CIDEr | Config |
|--------|-----|-------|--------|
| PTP-BLIP | 40.1 | 135.0 | configs/caption_coco.yaml |

2.2 Zero-shot Retrieval

2.2.1 Flickr30K

| Method | I2T@1 | T2I@1 | Model Weight | Training Logs | Config |
|--------|-------|-------|--------------|---------------|--------|
| PTP-BLIP | 86.4 | 67.0 | link | link | configs/retrieval_flickr.yaml |

2.3 Retrieval (Fine-tune)

Tip: Use as large a batch size as possible; we find experimentally that a larger batch size leads to better results on this task. Due to memory limitations, we use batch size 24 rather than 28 as in the original implementation.

2.3.1 COCO

| Method | I2T@1 | T2I@1 | Config |
|--------|-------|-------|--------|
| PTP-BLIP | 77.6 | 59.4 | configs/retrieval_coco.yaml |

2.3.2 Flickr30K

| Method | I2T@1 | T2I@1 | Model Weight | Training Logs | Config |
|--------|-------|-------|--------------|---------------|--------|
| PTP-BLIP | 96.1 | 84.2 | link | link | configs/retrieval_flickr.yaml |

2.4 VQA V2

| Method | Test-dev | Test-std | Model Weight | Training Logs | Config |
|--------|----------|----------|--------------|---------------|--------|
| PTP-BLIP | 76.02 | 76.18 | link | link | configs/vqa.yaml |

2.5 NLVR

| Method | Dev | Test-P | Model Weight | Training Logs | Config |
|--------|-----|--------|--------------|---------------|--------|
| PTP-BLIP | 80.45 | 80.70 | link | link | configs/nlvr.yaml |

Quick Start

Follow the examples in GETTING_STARTED.md to start playing with PTP-based VLP models.
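To give a flavor of the zero-shot retrieval step, the sketch below shows the standard ITC-style ranking over normalized image and text features. Random tensors stand in for the encoder outputs, since the actual model interface is documented in GETTING_STARTED.md:

```python
# Hedged sketch of ITC-style zero-shot retrieval ranking. How image/text
# features are produced is model-specific; random tensors stand in for the
# encoder outputs here.
import torch
import torch.nn.functional as F

image_feats = F.normalize(torch.randn(8, 256), dim=-1)   # stand-in for image encoder output
text_feats = F.normalize(torch.randn(40, 256), dim=-1)   # stand-in for text encoder output

sim = image_feats @ text_feats.t()   # (N_images, N_texts) cosine similarities
i2t_top1 = sim.argmax(dim=1)         # best caption index per image
t2i_top1 = sim.argmax(dim=0)         # best image index per caption
print(i2t_top1.tolist(), t2i_top1.tolist())
```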

Transfer To Other Architectures

PTP can be transferred to other architectures without much effort. Specifically, change your base code with the following two steps:

1. Generate PTP-augmented captions for your pre-training corpus (see Dataset Preparation above).
2. Replace the original captions with the PTP-augmented ones in your dataloader, as sketched below.

Then train the model with the original objectives.
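A minimal sketch of step 2, assuming a BLIP-style dataset that yields (image, caption, image_id) triplets; the wrapper class and the ptp_captions mapping are illustrative, not the repository's actual code:

```python
# Sketch of a dataset wrapper that swaps original captions for their
# PTP-augmented versions, leaving the rest of the training pipeline
# untouched. The wrapped dataset interface is an assumption.
from torch.utils.data import Dataset

class PTPCaptionDataset(Dataset):
    def __init__(self, base_dataset, ptp_captions):
        # ptp_captions: dict mapping image id -> PTP-augmented caption string
        self.base = base_dataset
        self.ptp_captions = ptp_captions

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, caption, image_id = self.base[idx]
        # Substitute the PTP-augmented caption; fall back to the original
        # caption when no PTP text exists for this image.
        caption = self.ptp_captions.get(image_id, caption)
        return image, caption, image_id
```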

Acknowledgement

This work is mainly based on BLIP and ViLT; thanks for these good baselines. We also refer to OSCAR for the ablation study and dataset preparation.

License

PTP is released under the Apache 2.0 license.

Contact

Email: awinyimgprocess at gmail dot com

If you have any questions, please email me or open a new issue.

Citation

If you find our work helpful, please cite it using the following BibTeX entry.

@article{wang2022ptp,
  title={Position-guided Text Prompt for Vision-Language Pre-training},
  author={Wang, Alex Jinpeng and Zhou, Pan and Shou, Mike Zheng and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2212.09737},
  year={2022}
}