
Dense Contrastive Learning for Self-Supervised Visual Pre-Training

This project hosts the code for implementing the DenseCL algorithm for self-supervised representation learning.

Dense Contrastive Learning for Self-Supervised Visual Pre-Training,
Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei Li
In: Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021, Oral
arXiv preprint (arXiv 2011.09157)
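DenseCL extends MoCo-style global contrastive learning with a pixel-level (dense) contrastive loss: each cell of a dense feature grid is pulled toward its corresponding cell in the other augmented view and pushed away from dense features of other images. As a rough orientation, here is a minimal sketch of such a dense InfoNCE loss; the tensor names, shapes, and the cosine-similarity matching rule are simplifying assumptions for illustration, not the repo's exact implementation (the paper derives the cross-view correspondence from backbone features).

```python
import torch
import torch.nn.functional as F

def dense_contrastive_loss(q, k, queue, tau=0.2):
    """q, k: (N, C, S) dense projection-head outputs for two augmented
    views of the same images, with S = H*W grid cells;
    queue: (C, K) memory bank of negative dense features."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    # Pair each query cell with its most similar key cell. (The paper
    # extracts this correspondence from backbone features; plain cosine
    # similarity between projections is used here for brevity.)
    sim = torch.einsum('ncs,nct->nst', q, k)                   # (N, S, S)
    match = sim.argmax(dim=2)                                  # (N, S)
    pos = torch.gather(k, 2, match.unsqueeze(1).expand_as(q))  # (N, C, S)
    l_pos = (q * pos).sum(dim=1, keepdim=True)                 # (N, 1, S)
    l_neg = torch.einsum('ncs,ck->nks', q, queue)              # (N, K, S)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau            # (N, 1+K, S)
    # The positive sits at index 0 along the class dimension.
    target = torch.zeros(q.size(0), q.size(2), dtype=torch.long,
                         device=q.device)
    return F.cross_entropy(logits, target)
```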

Highlights

Updates

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Models

For your convenience, we provide the following models pre-trained on COCO or ImageNet.

| pre-train method | pre-train dataset | backbone | #epoch | training time | VOC det | VOC seg | Link |
|---|---|---|---|---|---|---|---|
| MoCo-v2 | COCO | ResNet-50 | 800 | 1.0d | 54.7 | 64.5 | – |
| DenseCL | COCO | ResNet-50 | 800 | 1.0d | 56.7 | 67.5 | download |
| DenseCL | COCO | ResNet-50 | 1600 | 2.0d | 57.2 | 68.0 | download |
| MoCo-v2 | ImageNet | ResNet-50 | 200 | 2.3d | 57.0 | 67.5 | – |
| DenseCL | ImageNet | ResNet-50 | 200 | 2.3d | 58.7 | 69.4 | download |
| DenseCL | ImageNet | ResNet-101 | 200 | 4.3d | 61.3 | 74.1 | download |

Note: we also provide experiments using DenseCL in AdelaiDet models, e.g., SOLOv2 and FCOS (results in the tables below). Please refer to the instructions for simple usage.

SOLOv2:

| pre-train method | pre-train dataset | mask AP |
|---|---|---|
| Supervised | ImageNet | 35.2 |
| MoCo-v2 | ImageNet | 35.2 |
| DenseCL | ImageNet | 35.7 (+0.5) |

FCOS:

| pre-train method | pre-train dataset | box AP |
|---|---|---|
| Supervised | ImageNet | 39.9 |
| MoCo-v2 | ImageNet | 40.3 |
| DenseCL | ImageNet | 40.9 (+1.0) |

Usage

Training

```shell
# Usage: ./tools/dist_train.sh <config file> <number of GPUs>
./tools/dist_train.sh configs/selfsup/densecl/densecl_coco_800ep.py 8
```

Extracting Backbone Weights

```shell
# Paths below follow the 800-epoch COCO training run above.
WORK_DIR=work_dirs/selfsup/densecl/densecl_coco_800ep/
CHECKPOINT=${WORK_DIR}/epoch_800.pth
WEIGHT_FILE=${WORK_DIR}/extracted_densecl_coco_800ep.pth

python tools/extract_backbone_weights.py ${CHECKPOINT} ${WEIGHT_FILE}
```
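The extracted file keeps only the backbone weights and drops the projection heads. As a quick sanity check, a sketch like the following (assuming the OpenSelfSup convention of torchvision-style keys stored under `state_dict`) should load them into a plain ResNet-50, with only the classifier head reported missing:

```python
# Hedged sketch: load the extracted DenseCL backbone into a torchvision
# ResNet-50. Assumes torchvision-style key names stored under 'state_dict'
# (the OpenSelfSup convention); adjust if your file differs.
import torch
import torchvision

ckpt = torch.load('extracted_densecl_coco_800ep.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)

model = torchvision.models.resnet50()
# strict=False: the classification head (fc.*) is not part of the backbone.
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)        # expect only fc.weight / fc.bias
print('unexpected keys:', unexpected)  # expect an empty list
```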

Transferring to Object Detection and Segmentation

Please refer to README.md for transferring to object detection and semantic segmentation, and to the instructions for transferring to dense prediction models in AdelaiDet, e.g., SOLOv2 and FCOS.
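The detection benchmarks consume detectron2-format weights. If you need to convert the extracted torchvision-style backbone yourself, a sketch along the lines of the conversion script shipped with MoCo/PyContrast could look like this; the key renaming is simplified and should be double-checked against your detectron2 version:

```python
# Hedged sketch: repackage torchvision-style backbone weights as a
# detectron2-loadable .pkl, following the same idea as MoCo's converter.
import pickle
import torch

obj = torch.load('extracted_densecl_coco_800ep.pth', map_location='cpu')
obj = obj.get('state_dict', obj)

newmodel = {}
for k, v in obj.items():
    if 'layer' not in k:
        k = 'stem.' + k                      # conv1/bn1 -> stem.*
    for t in [1, 2, 3, 4]:
        k = k.replace(f'layer{t}', f'res{t + 1}')
    for t in [1, 2, 3]:
        k = k.replace(f'bn{t}', f'conv{t}.norm')
    k = k.replace('downsample.0', 'shortcut')
    k = k.replace('downsample.1', 'shortcut.norm')
    newmodel[k] = v.numpy()

with open('densecl_coco_800ep_detectron2.pkl', 'wb') as f:
    pickle.dump({'model': newmodel, '__author__': 'DenseCL',
                 'matching_heuristics': True}, f)
```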

Tips

Acknowledgement

We would like to thank OpenSelfSup for the open-source project and PyContrast for the detection evaluation configs.

Citations

Please consider citing our paper in your publications if the project helps your research. The BibTeX reference is as follows.

```bibtex
@inproceedings{wang2020DenseCL,
  title     = {Dense Contrastive Learning for Self-Supervised Visual Pre-Training},
  author    = {Wang, Xinlong and Zhang, Rufeng and Shen, Chunhua and Kong, Tao and Li, Lei},
  booktitle = {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}
```