WXinlong / DenseCL

Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral.
GNU General Public License v3.0

The performance of detection in VOC #6

Closed jancylee closed 3 years ago

jancylee commented 3 years ago

(8 GPUs) When I use the network pretrained with coco-800ep-resnet50 for the detection task on VOC, the AP is only 44.76, while you achieve 56.7. I don't know why the gap is so large. Note that I changed the batch size from 16 to 8, and accordingly set the base lr from 0.02 to 0.01.

WXinlong commented 3 years ago

@jancylee Did you directly download the provided pre-trained weights? Please provide your experiment scripts and I will see what the problem is.

jancylee commented 3 years ago

I pretrained the model myself (coco-800ep-resnet50).

jancylee commented 3 years ago

Compared to your code, I didn't change any settings in the pretraining process, and I only changed the batch size and base lr in the detection process.

WXinlong commented 3 years ago

Please make sure you have followed the instructions in the readme: https://github.com/WXinlong/DenseCL#extracting-backbone-weights

You have to 1) extract the backbone weights and 2) convert them to detectron2 format.
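For reference, the extraction step boils down to keeping only the backbone's parameters from the full self-supervised checkpoint (dropping the projection neck, momentum encoder, queue, etc.) and stripping the key prefix. A minimal sketch with plain dicts — the key names below are illustrative, not the exact DenseCL checkpoint layout; the real tool is tools/extract_backbone_weights.py:

```python
def extract_backbone(state_dict, prefix="backbone."):
    """Keep only the keys under `prefix` and strip the prefix."""
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

# Illustrative checkpoint layout (not the exact DenseCL key names).
checkpoint = {
    "backbone.conv1.weight": "w0",
    "backbone.layer1.0.conv1.weight": "w1",
    "neck.mlp.0.weight": "w2",   # projection head: dropped
    "queue": "q",                # contrastive queue: dropped
}
backbone = extract_backbone(checkpoint)
```

Fine-tuning with the full checkpoint instead of the extracted backbone is exactly the kind of mismatch that silently degrades detection AP.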

jancylee commented 3 years ago

I did convert it. By the way, when I train the detector from scratch (without loading a pretrained model), the AP is only 12.8.

WXinlong commented 3 years ago

It looks like the problem is in your detection experiments, not the pre-trained weights. I suggest you first reproduce the detection results using either random init. or a supervised pretrained model, i.e., make sure you can get the same results with the same settings.

jancylee commented 3 years ago

I use 4 GPUs, batch size = 4, base lr = 0.005, iter = 24000*4, steps = 18000*4, 22000*4.
In your settings: 8 GPUs, batch size = 16, base lr = 0.02, iter = 24000, steps = 18000, 22000. And when I download your pretrained model (coco-800ep-resnet50), the performance is AP = 51.19, while yours is 56.7. It's still a large gap. What settings should I change to achieve 56.7?
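The lr choices in this thread follow the common linear-scaling heuristic (base lr proportional to total batch size) — a general convention, not something specific to this repo. A quick sanity check with a hypothetical helper:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: keep lr / batch_size constant."""
    return base_lr * new_batch / base_batch

# Reference detection setting is batch size 16 with base lr 0.02.
lr_bs8 = scale_lr(0.02, 16, 8)   # ~0.01, as used above with batch size 8
lr_bs4 = scale_lr(0.02, 16, 4)   # ~0.005, as used above with batch size 4
```

Scaling the lr keeps optimization roughly comparable, but as discussed below it does not fully compensate for a much smaller batch (e.g. BatchNorm statistics change).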

zzzzzz0407 commented 3 years ago

@jancylee Please directly copy your training config.yaml instead of a few individual parameters; we need to make sure you set the correct parameters (e.g. input format: RGB, pixel mean/std, etc.).
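The input-format point is easy to get wrong: detectron2's PIXEL_MEAN/PIXEL_STD must be listed in the same channel order as INPUT.FORMAT, so a config written for a BGR (Caffe-style) model cannot be reused verbatim with an RGB (torchvision-style) backbone. A hypothetical illustration using the common ImageNet statistics (values are a general convention, not taken from this thread):

```python
# ImageNet pixel means on a 0-255 scale, in RGB channel order.
IMAGENET_MEAN_RGB = [123.675, 116.28, 103.53]

def pixel_mean(input_format):
    """Return the mean in the channel order implied by INPUT.FORMAT."""
    if input_format == "RGB":
        return IMAGENET_MEAN_RGB
    if input_format == "BGR":
        return IMAGENET_MEAN_RGB[::-1]
    raise ValueError(f"unknown format: {input_format}")
```

Using the BGR-ordered mean on RGB inputs shifts the R and B channels by roughly 20 intensity units each, which is enough to quietly cost several points of AP.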

jancylee commented 3 years ago

The config.yaml of the VOC07&12 object detection run (truncated):

CUDNN_BENCHMARK: false
DATALOADER:
  ASPECT_RATIO_GROUPING: true
  FILTER_EMPTY_ANNOTATIONS: true
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: []
  PROPOSAL_FILES_TRAIN: []
  TEST:

jancylee commented 3 years ago

And when I use my own pretrained model (coco-800ep-resnet50, with the same pretraining settings as yours) to fine-tune object detection on VOC (detection settings as above), it only reaches AP = 48.16, compared to AP = 51.19 (your pretrained model with my fine-tuned detector) and AP = 56.7 (the result in your paper).

zzzzzz0407 commented 3 years ago

@jancylee Can you try to use an official model (e.g. mocov2) to reproduce its VOC detection performance? https://github.com/open-mmlab/OpenSelfSup/blob/master/docs/MODEL_ZOO.md In my opinion, there is no issue in the config you provided except the batch size; a batch size that is too small may cause a performance drop. You can verify this with the official model.

jancylee commented 3 years ago

Thanks a lot. I found the reason: when I use batch size 16, it achieves 56.54 (compared to the paper's 56.7), but that is with the pretrained model provided on your GitHub page. When I use the pretrained model I trained myself, it only achieves 49.78, even though I use your code and follow your pretraining settings exactly. I don't know why. By the way, fine-tuning from the moco-v2 pretrained model achieves 53.92, which is almost the same as your paper's result.

jancylee commented 3 years ago

I can provide you the training settings later. The only difference is workers_per_gpu: I set 8 while you set 4, which only influences data loading speed.

zzzzzz0407 commented 3 years ago

@jancylee So the batch size plays an important role in training detection. Can you provide the config used for training DenseCL?

jancylee commented 3 years ago

2021-03-25 09:31:12,868 - openselfsup - INFO - Environment info:

sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:

TorchVision: 0.5.0
OpenCV: 4.5.1
MMCV: 1.0.3
OpenSelfSup: 0.2.0+9e827db

2021-03-25 09:31:12,869 - openselfsup - INFO - Distributed training: True
2021-03-25 09:31:12,869 - openselfsup - INFO - Config:

/home/codes/DenseCL/configs/base.py
train_cfg = {}
test_cfg = {}
optimizer_config = dict()  # grad_clip, coalesce, bucket_size_mb

# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
# yapf:enable

# runtime settings
dist_params = dict(backend='nccl')
cudnn_benchmark = True
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]

/home/codes/DenseCL/configs/selfsup/densecl/densecl_coco_800ep.py
_base_ = '../../base.py'

# model settings
model = dict(
    type='DenseCL',
    pretrained=None,
    queue_len=65536,
    feat_dim=128,
    momentum=0.999,
    loss_lambda=0.5,
    backbone=dict(
        type='ResNet',
        depth=50,
        in_channels=3,
        out_indices=[4],  # 0: conv-1, x: stage-x
        norm_cfg=dict(type='BN')),
    neck=dict(
        type='DenseCLNeck',
        in_channels=2048,
        hid_channels=2048,
        out_channels=128,
        num_grid=None),
    head=dict(type='ContrastiveHead', temperature=0.2))

# head2=dict(type='TripleHead', margin=0.3),

# head3=dict(type='ContrastiveLXNHead', temperature=0.2))

# dataset settings
data_source_cfg = dict(
    type='COCO',
    memcached=True,
    mclient_path='/mnt/lustre/share/memcached_client')
data_train_list = ''
# data_train_root = '/data2/ImageDataset/coco/train2017/train2017/'
data_train_root = '/home/data/train2017/'
dataset_type = 'ContrastiveDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline = [
    dict(type='RandomResizedCrop', size=224, scale=(0.2, 1.)),
    dict(
        type='RandomAppliedTrans',
        transforms=[
            dict(
                type='ColorJitter',
                brightness=0.4,
                contrast=0.4,
                saturation=0.4,
                hue=0.1)
        ],
        p=0.8),
    dict(type='RandomGrayscale', p=0.2),
    dict(
        type='RandomAppliedTrans',
        transforms=[dict(type='GaussianBlur', sigma_min=0.1, sigma_max=2.0)],
        p=0.5),
    dict(type='RandomHorizontalFlip'),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
data = dict(
    imgs_per_gpu=32,  # total 32*8=256
    workers_per_gpu=8,
    drop_last=True,
    train=dict(
        type=dataset_type,
        data_source=dict(
            list_file=data_train_list,
            root=data_train_root,
            **data_source_cfg),
        pipeline=train_pipeline))
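As a consistency check on the data settings above (using the documented size of COCO train2017, 118,287 images, which is not stated in this thread):

```python
COCO_TRAIN2017_SIZE = 118_287  # number of images in COCO train2017
imgs_per_gpu, num_gpus = 32, 8

total_batch = imgs_per_gpu * num_gpus                  # 256, as the config comment says
iters_per_epoch = COCO_TRAIN2017_SIZE // total_batch   # drop_last=True truncates the tail
print(total_batch, iters_per_epoch)
```

This agrees with the [.../462] per-epoch counters in the training log below, so the data pipeline is at least seeing the full dataset with the intended total batch size.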

# optimizer
optimizer = dict(type='SGD', lr=0.3, weight_decay=0.0001, momentum=0.9)

# learning policy
lr_config = dict(policy='CosineAnnealing', min_lr=0.)
checkpoint_config = dict(interval=40)
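For reference, CosineAnnealing with min_lr=0 decays the lr as 0.5 * base_lr * (1 + cos(pi * t / T)). A sketch (assuming the schedule is stepped once per epoch, 0-indexed — an assumption about mmcv's hook, not stated in the thread) that reproduces the lr values printed in the training log below:

```python
import math

def cosine_lr(base_lr, epoch, total_epochs, min_lr=0.0):
    # Cosine annealing: base_lr at epoch 0 decaying to min_lr at the end.
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * epoch / total_epochs))

# Epoch [794] in the log (0-indexed epoch 793 of 800) prints lr: 5.667e-05;
# Epoch [800] (0-indexed epoch 799) prints lr: 1.157e-06.
print(f"{cosine_lr(0.3, 793, 800):.3e}")
print(f"{cosine_lr(0.3, 799, 800):.3e}")
```

Matching these values confirms the schedule (base lr 0.3, 800 epochs, min_lr 0) ran as configured.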

# runtime settings
total_epochs = 800

2021-03-25 09:31:12,871 - openselfsup - INFO - Set random seed to 0, deterministic: False
2021-03-25 09:31:21,655 - openselfsup - INFO - Start running, host: root@c354f9782387, work_dir: /home/codes/DenseCL/work_dirs/selfsup/densecl/densecl_coco_800ep
2021-03-25 09:31:21,655 - openselfsup - INFO - workflow: [('train', 1)], max: 800 epochs
2021-03-25 09:32:07,330 - openselfsup - INFO - Epoch [1][50/462] lr: 3.000e-01, eta: 3 days, 21:37:36, time: 0.912, data_time: 0.492, memory: 12516, loss_contra_single: 4.2904, loss_contra_dense: 4.3897, loss: 8.6800
2021-03-25 09:32:17,519 - openselfsup - INFO - Epoch [1][100/462] lr: 3.000e-01, eta: 2 days, 9:16:22, time: 0.204, data_time: 0.001, memory: 12516, loss_contra_single: 4.8449, loss_contra_dense: 4.7468, loss: 9.5917
2021-03-25 09:32:27,725 - openselfsup - INFO - Epoch [1][150/462] lr: 3.000e-01, eta: 1 day, 21:09:31, time: 0.204, data_time: 0.000, memory: 12516, loss_contra_single: 4.9559, loss_contra_dense: 4.7926, loss: 9.7485
2021-03-25 09:32:37,942 - openselfsup - INFO - Epoch [1][200/462] lr: 3.000e-01, eta: 1 day, 15:06:21, time: 0.204, data_time: 0.000, memory: 12516, loss_contra_single: 4.9797, loss_contra_dense: 4.7833, loss: 9.7630
2021-03-25 09:32:48,175 - openselfsup - INFO - Epoch [1][250/462] lr: 3.000e-01, eta: 1 day, 11:28:46, time: 0.205, data_time: 0.000, memory: 12516, loss_contra_single: 4.9750, loss_contra_dense: 4.7667, loss: 9.7417
2021-03-25 09:32:58,344 - openselfsup - INFO - Epoch [1][300/462] lr: 3.000e-01, eta: 1 day, 9:02:21, time: 0.203, data_time: 0.000, memory: 12516, loss_contra_single: 4.9372, loss_contra_dense: 4.7216, loss: 9.6588
2021-03-25 09:33:08,456 - openselfsup - INFO - Epoch [1][350/462] lr: 3.000e-01, eta: 1 day, 7:16:39, time: 0.202, data_time: 0.000, memory: 12516, loss_contra_single: 4.9337, loss_contra_dense: 4.7003, loss: 9.6340
2021-03-25 09:33:18,506 - openselfsup - INFO - Epoch [1][400/462] lr: 3.000e-01, eta: 1 day, 5:56:29, time: 0.201, data_time: 0.000, memory: 12516, loss_contra_single: 4.9463, loss_contra_dense: 4.6928, loss: 9.6391
2021-03-25 09:33:28,615 - openselfsup - INFO - Epoch [1][450/462] lr: 3.000e-01, eta: 1 day, 4:54:52, time: 0.202, data_time: 0.000, memory: 12516, loss_contra_single: 4.9608, loss_contra_dense: 4.6925, loss: 9.6534
2021-03-25 09:34:07,235 - openselfsup - INFO - Epoch [2][50/462] lr: 3.000e-01, eta: 1 day, 8:24:45, time: 0.700, data_time: 0.453, memory: 12516, loss_contra_single: 4.9793, loss_contra_dense: 4.6959, loss: 9.6752
2021-03-25 09:34:17,127 - openselfsup - INFO - Epoch [2][100/462] lr: 3.000e-01, eta: 1 day, 7:19:46, time: 0.198, data_time: 0.000, memory: 12516, loss_contra_single: 4.9893, loss_contra_dense: 4.6976, loss: 9.6869
2021-03-25 09:34:26,972 - openselfsup - INFO - Epoch [2][150/462] lr: 3.000e-01, eta: 1 day, 6:24:51, time: 0.197, data_time: 0.000, memory: 12516, loss_contra_single: 4.9949, loss_contra_dense: 4.7060, loss: 9.7009
2021-03-25 09:34:36,778 - openselfsup - INFO - Epoch [2][200/462] lr: 3.000e-01, eta: 1 day, 5:37:53, time: 0.196, data_time: 0.001, memory: 12516, loss_contra_single: 4.9986, loss_contra_dense: 4.7151, loss: 9.7137
2021-03-25 09:34:46,731 - openselfsup - INFO - Epoch [2][250/462] lr: 3.000e-01, eta: 1 day, 4:58:45, time: 0.199, data_time: 0.000, memory: 12516, loss_contra_single: 5.0037, loss_contra_dense: 4.7257, loss: 9.7293
2021-03-25 09:34:56,564 - openselfsup - INFO - Epoch [2][300/462] lr: 3.000e-01, eta: 1 day, 4:23:43, time: 0.197, data_time: 0.000, memory: 12516, loss_contra_single: 4.9995, loss_contra_dense: 4.7437, loss: 9.7433
2021-03-25 09:35:06,499 - openselfsup - INFO - Epoch [2][350/462] lr: 3.000e-01, eta: 1 day, 3:53:44, time: 0.199, data_time: 0.001, memory: 12516, loss_contra_single: 4.9955, loss_contra_dense: 4.7586, loss: 9.7541
2021-03-25 09:35:16,331 - openselfsup - INFO - Epoch [2][400/462] lr: 3.000e-01, eta: 1 day, 3:26:37, time: 0.197, data_time: 0.001, memory: 12516, loss_contra_single: 4.9901, loss_contra_dense: 4.7736, loss: 9.7637
2021-03-25 09:35:26,245 - openselfsup - INFO - Epoch [2][450/462] lr: 3.000e-01, eta: 1 day, 3:02:55, time: 0.198, data_time: 0.000, memory: 12516, loss_contra_single: 4.9849, loss_contra_dense: 4.7901, loss: 9.7750
2021-03-25 09:36:05,476 - openselfsup - INFO - Epoch [3][50/462] lr: 3.000e-01, eta: 1 day, 5:02:47, time: 0.708, data_time: 0.487, memory: 12516, loss_contra_single: 4.9844, loss_contra_dense: 4.8135, loss: 9.7979
2021-03-25 09:36:15,384 - openselfsup - INFO - Epoch [3][100/462] lr: 3.000e-01, eta: 1 day, 4:36:54, time: 0.198, data_time: 0.000, memory: 12516, loss_contra_single: 4.9781, loss_contra_dense: 4.8317, loss: 9.8098
2021-03-25 09:36:25,287 - openselfsup - INFO - Epoch [3][150/462] lr: 3.000e-01, eta: 1 day, 4:13:22, time: 0.198, data_time: 0.000, memory: 12516, loss_contra_single: 4.9746, loss_contra_dense: 4.8472, loss: 9.8218
2021-03-25 09:36:35,163 - openselfsup - INFO - Epoch [3][200/462] lr: 3.000e-01, eta: 1 day, 3:51:45, time: 0.197, data_time: 0.000, memory: 12516, loss_contra_single: 4.9696, loss_contra_dense: 4.8614, loss: 9.8310
.........
2021-03-26 12:21:25,422 - openselfsup - INFO - Epoch [794][50/462] lr: 5.667e-05, eta: 0:13:32, time: 1.546, data_time: 1.022, memory: 12516, loss_contra_single: 3.3514, loss_contra_dense: 3.3958, loss: 6.7473
2021-03-26 12:21:52,574 - openselfsup - INFO - Epoch [794][100/462] lr: 5.667e-05, eta: 0:13:19, time: 0.543, data_time: 0.001, memory: 12516, loss_contra_single: 3.3540, loss_contra_dense: 3.3993, loss: 6.7533
2021-03-26 12:22:19,797 - openselfsup - INFO - Epoch [794][150/462] lr: 5.667e-05, eta: 0:13:07, time: 0.544, data_time: 0.001, memory: 12516, loss_contra_single: 3.3488, loss_contra_dense: 3.3952, loss: 6.7440
2021-03-26 12:22:47,042 - openselfsup - INFO - Epoch [794][200/462] lr: 5.667e-05, eta: 0:12:54, time: 0.545, data_time: 0.002, memory: 12516, loss_contra_single: 3.3503, loss_contra_dense: 3.3952, loss: 6.7456
2021-03-26 12:23:14,157 - openselfsup - INFO - Epoch [794][250/462] lr: 5.667e-05, eta: 0:12:41, time: 0.543, data_time: 0.001, memory: 12516, loss_contra_single: 3.3494, loss_contra_dense: 3.3933, loss: 6.7428
2021-03-26 12:23:41,462 - openselfsup - INFO - Epoch [794][300/462] lr: 5.667e-05, eta: 0:12:29, time: 0.546, data_time: 0.001, memory: 12516, loss_contra_single: 3.3535, loss_contra_dense: 3.3990, loss: 6.7526
2021-03-26 12:24:08,767 - openselfsup - INFO - Epoch [794][350/462] lr: 5.667e-05, eta: 0:12:16, time: 0.546, data_time: 0.001, memory: 12516, loss_contra_single: 3.3507, loss_contra_dense: 3.3964, loss: 6.7471
2021-03-26 12:24:35,944 - openselfsup - INFO - Epoch [794][400/462] lr: 5.667e-05, eta: 0:12:03, time: 0.544, data_time: 0.001, memory: 12516, loss_contra_single: 3.3532, loss_contra_dense: 3.3980, loss: 6.7512
2021-03-26 12:25:01,908 - openselfsup - INFO - Epoch [794][450/462] lr: 5.667e-05, eta: 0:11:51, time: 0.520, data_time: 0.001, memory: 12516, loss_contra_single: 3.3498, loss_contra_dense: 3.3968, loss: 6.7467
2021-03-26 12:26:21,693 - openselfsup - INFO - Epoch [795][50/462] lr: 4.164e-05, eta: 0:11:35, time: 1.507, data_time: 0.929, memory: 12516, loss_contra_single: 3.3507, loss_contra_dense: 3.3954, loss: 6.7461
2021-03-26 12:26:48,843 - openselfsup - INFO - Epoch [795][100/462] lr: 4.164e-05, eta: 0:11:23, time: 0.543, data_time: 0.001, memory: 12516, loss_contra_single: 3.3529, loss_contra_dense: 3.3998, loss: 6.7527
2021-03-26 12:27:16,140 - openselfsup - INFO - Epoch [795][150/462] lr: 4.164e-05, eta: 0:11:10, time: 0.546, data_time: 0.001, memory: 12516, loss_contra_single: 3.3472, loss_contra_dense: 3.3894, loss: 6.7365
2021-03-26 12:27:43,551 - openselfsup - INFO - Epoch [795][200/462] lr: 4.164e-05, eta: 0:10:57, time: 0.548, data_time: 0.001, memory: 12516, loss_contra_single: 3.3509, loss_contra_dense: 3.3920, loss: 6.7429
2021-03-26 12:28:10,263 - openselfsup - INFO - Epoch [795][250/462] lr: 4.164e-05, eta: 0:10:45, time: 0.534, data_time: 0.001, memory: 12516, loss_contra_single: 3.3505, loss_contra_dense: 3.3980, loss: 6.7485
2021-03-26 12:28:37,463 - openselfsup - INFO - Epoch [795][300/462] lr: 4.164e-05, eta: 0:10:32, time: 0.544, data_time: 0.002, memory: 12516, loss_contra_single: 3.3510, loss_contra_dense: 3.3978, loss: 6.7489
2021-03-26 12:29:03,851 - openselfsup - INFO - Epoch [795][350/462] lr: 4.164e-05, eta: 0:10:19, time: 0.529, data_time: 0.001, memory: 12516, loss_contra_single: 3.3467, loss_contra_dense: 3.3945, loss: 6.7412
2021-03-26 12:29:15,641 - openselfsup - INFO - Epoch [795][400/462] lr: 4.164e-05, eta: 0:10:06, time: 0.234, data_time: 0.000, memory: 12516, loss_contra_single: 3.3486, loss_contra_dense: 3.3935, loss: 6.7421
2021-03-26 12:29:41,146 - openselfsup - INFO - Epoch [795][450/462] lr: 4.164e-05, eta: 0:09:54, time: 0.511, data_time: 0.003, memory: 12516, loss_contra_single: 3.3532, loss_contra_dense: 3.3981, loss: 6.7513
2021-03-26 12:30:49,824 - openselfsup - INFO - Epoch [796][50/462] lr: 2.891e-05, eta: 0:09:38, time: 1.202, data_time: 0.589, memory: 12516, loss_contra_single: 3.3496, loss_contra_dense: 3.3921, loss: 6.7417
2021-03-26 12:31:17,298 - openselfsup - INFO - Epoch [796][100/462] lr: 2.891e-05, eta: 0:09:25, time: 0.549, data_time: 0.001, memory: 12516, loss_contra_single: 3.3510, loss_contra_dense: 3.3962, loss: 6.7472
2021-03-26 12:31:44,261 - openselfsup - INFO - Epoch [796][150/462] lr: 2.891e-05, eta: 0:09:13, time: 0.540, data_time: 0.002, memory: 12516, loss_contra_single: 3.3518, loss_contra_dense: 3.3961, loss: 6.7479
2021-03-26 12:32:11,650 - openselfsup - INFO - Epoch [796][200/462] lr: 2.891e-05, eta: 0:09:00, time: 0.548, data_time: 0.001, memory: 12516, loss_contra_single: 3.3503, loss_contra_dense: 3.3957, loss: 6.7461
2021-03-26 12:32:38,803 - openselfsup - INFO - Epoch [796][250/462] lr: 2.891e-05, eta: 0:08:47, time: 0.543, data_time: 0.001, memory: 12516, loss_contra_single: 3.3511, loss_contra_dense: 3.3947, loss: 6.7458
2021-03-26 12:32:52,589 - openselfsup - INFO - Epoch [796][300/462] lr: 2.891e-05, eta: 0:08:34, time: 0.277, data_time: 0.001, memory: 12516, loss_contra_single: 3.3544, loss_contra_dense: 3.4028, loss: 6.7572
2021-03-26 12:33:06,576 - openselfsup - INFO - Epoch [796][350/462] lr: 2.891e-05, eta: 0:08:22, time: 0.279, data_time: 0.000, memory: 12516, loss_contra_single: 3.3506, loss_contra_dense: 3.3926, loss: 6.7432
2021-03-26 12:33:33,965 - openselfsup - INFO - Epoch [796][400/462] lr: 2.891e-05, eta: 0:08:09, time: 0.548, data_time: 0.001, memory: 12516, loss_contra_single: 3.3493, loss_contra_dense: 3.3944, loss: 6.7437
2021-03-26 12:34:01,336 - openselfsup - INFO - Epoch [796][450/462] lr: 2.891e-05, eta: 0:07:56, time: 0.547, data_time: 0.001, memory: 12516, loss_contra_single: 3.3507, loss_contra_dense: 3.3954, loss: 6.7460
2021-03-26 12:35:08,833 - openselfsup - INFO - Epoch [797][50/462] lr: 1.851e-05, eta: 0:07:40, time: 1.176, data_time: 0.548, memory: 12516, loss_contra_single: 3.3529, loss_contra_dense: 3.3998, loss: 6.7527
2021-03-26 12:35:36,459 - openselfsup - INFO - Epoch [797][100/462] lr: 1.851e-05, eta: 0:07:28, time: 0.552, data_time: 0.001, memory: 12516, loss_contra_single: 3.3521, loss_contra_dense: 3.3962, loss: 6.7483
2021-03-26 12:36:03,984 - openselfsup - INFO - Epoch [797][150/462] lr: 1.851e-05, eta: 0:07:15, time: 0.551, data_time: 0.001, memory: 12516, loss_contra_single: 3.3460, loss_contra_dense: 3.3921, loss: 6.7380
2021-03-26 12:36:28,534 - openselfsup - INFO - Epoch [797][200/462] lr: 1.851e-05, eta: 0:07:02, time: 0.492, data_time: 0.001, memory: 12516, loss_contra_single: 3.3482, loss_contra_dense: 3.3928, loss: 6.7410
2021-03-26 12:36:43,668 - openselfsup - INFO - Epoch [797][250/462] lr: 1.851e-05, eta: 0:06:49, time: 0.300, data_time: 0.001, memory: 12516, loss_contra_single: 3.3477, loss_contra_dense: 3.3957, loss: 6.7434
2021-03-26 12:37:10,218 - openselfsup - INFO - Epoch [797][300/462] lr: 1.851e-05, eta: 0:06:37, time: 0.532, data_time: 0.003, memory: 12516, loss_contra_single: 3.3495, loss_contra_dense: 3.3969, loss: 6.7464
2021-03-26 12:37:37,531 - openselfsup - INFO - Epoch [797][350/462] lr: 1.851e-05, eta: 0:06:24, time: 0.546, data_time: 0.001, memory: 12516, loss_contra_single: 3.3499, loss_contra_dense: 3.3961, loss: 6.7461
2021-03-26 12:38:05,004 - openselfsup - INFO - Epoch [797][400/462] lr: 1.851e-05, eta: 0:06:11, time: 0.549, data_time: 0.001, memory: 12516, loss_contra_single: 3.3506, loss_contra_dense: 3.3961, loss: 6.7467
2021-03-26 12:38:32,088 - openselfsup - INFO - Epoch [797][450/462] lr: 1.851e-05, eta: 0:05:58, time: 0.542, data_time: 0.001, memory: 12516, loss_contra_single: 3.3519, loss_contra_dense: 3.3956, loss: 6.7475
2021-03-26 12:39:39,993 - openselfsup - INFO - Epoch [798][50/462] lr: 1.041e-05, eta: 0:05:43, time: 1.184, data_time: 0.589, memory: 12516, loss_contra_single: 3.3476, loss_contra_dense: 3.3926, loss: 6.7402
2021-03-26 12:40:06,950 - openselfsup - INFO - Epoch [798][100/462] lr: 1.041e-05, eta: 0:05:30, time: 0.539, data_time: 0.001, memory: 12516, loss_contra_single: 3.3554, loss_contra_dense: 3.4029, loss: 6.7583
2021-03-26 12:40:18,646 - openselfsup - INFO - Epoch [798][150/462] lr: 1.041e-05, eta: 0:05:17, time: 0.235, data_time: 0.001, memory: 12516, loss_contra_single: 3.3486, loss_contra_dense: 3.3930, loss: 6.7416
2021-03-26 12:40:39,766 - openselfsup - INFO - Epoch [798][200/462] lr: 1.041e-05, eta: 0:05:04, time: 0.422, data_time: 0.000, memory: 12516, loss_contra_single: 3.3549, loss_contra_dense: 3.3996, loss: 6.7545
2021-03-26 12:41:07,195 - openselfsup - INFO - Epoch [798][250/462] lr: 1.041e-05, eta: 0:04:51, time: 0.549, data_time: 0.001, memory: 12516, loss_contra_single: 3.3471, loss_contra_dense: 3.3931, loss: 6.7402
2021-03-26 12:41:34,295 - openselfsup - INFO - Epoch [798][300/462] lr: 1.041e-05, eta: 0:04:38, time: 0.542, data_time: 0.001, memory: 12516, loss_contra_single: 3.3496, loss_contra_dense: 3.3983, loss: 6.7478
2021-03-26 12:42:01,648 - openselfsup - INFO - Epoch [798][350/462] lr: 1.041e-05, eta: 0:04:26, time: 0.547, data_time: 0.001, memory: 12516, loss_contra_single: 3.3475, loss_contra_dense: 3.3934, loss: 6.7410
2021-03-26 12:42:28,720 - openselfsup - INFO - Epoch [798][400/462] lr: 1.041e-05, eta: 0:04:13, time: 0.542, data_time: 0.001, memory: 12516, loss_contra_single: 3.3493, loss_contra_dense: 3.3954, loss: 6.7447
2021-03-26 12:42:56,082 - openselfsup - INFO - Epoch [798][450/462] lr: 1.041e-05, eta: 0:04:00, time: 0.547, data_time: 0.001, memory: 12516, loss_contra_single: 3.3506, loss_contra_dense: 3.3960, loss: 6.7466
2021-03-26 12:43:56,636 - openselfsup - INFO - Epoch [799][50/462] lr: 4.626e-06, eta: 0:03:44, time: 1.040, data_time: 0.621, memory: 12516, loss_contra_single: 3.3519, loss_contra_dense: 3.3987, loss: 6.7506
2021-03-26 12:44:09,563 - openselfsup - INFO - Epoch [799][100/462] lr: 4.626e-06, eta: 0:03:31, time: 0.259, data_time: 0.001, memory: 12516, loss_contra_single: 3.3519, loss_contra_dense: 3.3982, loss: 6.7501
2021-03-26 12:44:36,807 - openselfsup - INFO - Epoch [799][150/462] lr: 4.626e-06, eta: 0:03:19, time: 0.544, data_time: 0.001, memory: 12516, loss_contra_single: 3.3463, loss_contra_dense: 3.3912, loss: 6.7374
2021-03-26 12:45:04,153 - openselfsup - INFO - Epoch [799][200/462] lr: 4.626e-06, eta: 0:03:06, time: 0.547, data_time: 0.001, memory: 12516, loss_contra_single: 3.3494, loss_contra_dense: 3.3964, loss: 6.7458
2021-03-26 12:45:31,267 - openselfsup - INFO - Epoch [799][250/462] lr: 4.626e-06, eta: 0:02:53, time: 0.542, data_time: 0.001, memory: 12516, loss_contra_single: 3.3496, loss_contra_dense: 3.3959, loss: 6.7455
2021-03-26 12:45:58,880 - openselfsup - INFO - Epoch [799][300/462] lr: 4.626e-06, eta: 0:02:40, time: 0.552, data_time: 0.002, memory: 12516, loss_contra_single: 3.3501, loss_contra_dense: 3.3959, loss: 6.7460
2021-03-26 12:46:26,155 - openselfsup - INFO - Epoch [799][350/462] lr: 4.626e-06, eta: 0:02:27, time: 0.545, data_time: 0.001, memory: 12516, loss_contra_single: 3.3497, loss_contra_dense: 3.3944, loss: 6.7441
2021-03-26 12:46:53,468 - openselfsup - INFO - Epoch [799][400/462] lr: 4.626e-06, eta: 0:02:14, time: 0.546, data_time: 0.001, memory: 12516, loss_contra_single: 3.3532, loss_contra_dense: 3.3977, loss: 6.7509
2021-03-26 12:47:20,703 - openselfsup - INFO - Epoch [799][450/462] lr: 4.626e-06, eta: 0:02:01, time: 0.545, data_time: 0.001, memory: 12516, loss_contra_single: 3.3537, loss_contra_dense: 3.3994, loss: 6.7531
2021-03-26 12:48:48,682 - openselfsup - INFO - Epoch [800][50/462] lr: 1.157e-06, eta: 0:01:46, time: 1.582, data_time: 0.965, memory: 12516, loss_contra_single: 3.3498, loss_contra_dense: 3.3974, loss: 6.7472
2021-03-26 12:49:16,110 - openselfsup - INFO - Epoch [800][100/462] lr: 1.157e-06, eta: 0:01:33, time: 0.549, data_time: 0.001, memory: 12516, loss_contra_single: 3.3481, loss_contra_dense: 3.3978, loss: 6.7460
2021-03-26 12:49:42,993 - openselfsup - INFO - Epoch [800][150/462] lr: 1.157e-06, eta: 0:01:20, time: 0.538, data_time: 0.001, memory: 12516, loss_contra_single: 3.3473, loss_contra_dense: 3.3921, loss: 6.7394
2021-03-26 12:50:10,210 - openselfsup - INFO - Epoch [800][200/462] lr: 1.157e-06, eta: 0:01:07, time: 0.544, data_time: 0.001, memory: 12516, loss_contra_single: 3.3516, loss_contra_dense: 3.3967, loss: 6.7482
2021-03-26 12:50:37,746 - openselfsup - INFO - Epoch [800][250/462] lr: 1.157e-06, eta: 0:00:54, time: 0.551, data_time: 0.001, memory: 12516, loss_contra_single: 3.3483, loss_contra_dense: 3.3919, loss: 6.7402
2021-03-26 12:51:05,142 - openselfsup - INFO - Epoch [800][300/462] lr: 1.157e-06, eta: 0:00:41, time: 0.548, data_time: 0.001, memory: 12516, loss_contra_single: 3.3506, loss_contra_dense: 3.3970, loss: 6.7477
2021-03-26 12:51:32,655 - openselfsup - INFO - Epoch [800][350/462] lr: 1.157e-06, eta: 0:00:28, time: 0.550, data_time: 0.001, memory: 12516, loss_contra_single: 3.3494, loss_contra_dense: 3.3969, loss: 6.7463
2021-03-26 12:51:59,865 - openselfsup - INFO - Epoch [800][400/462] lr: 1.157e-06, eta: 0:00:15, time: 0.544, data_time: 0.001, memory: 12516, loss_contra_single: 3.3500, loss_contra_dense: 3.3958, loss: 6.7458
2021-03-26 12:52:25,972 - openselfsup - INFO - Epoch [800][450/462] lr: 1.157e-06, eta: 0:00:03, time: 0.523, data_time: 0.001, memory: 12516, loss_contra_single: 3.3513, loss_contra_dense: 3.3977, loss: 6.7491
2021-03-26 12:52:29,731 - openselfsup - INFO - Saving checkpoint at 800 epochs

zzzzzz0407 commented 3 years ago

@jancylee Could you please upload the model that you pretrained to Google cloud / Baidu cloud? We will check it for you.

jancylee commented 3 years ago

Link: https://pan.baidu.com/s/1tUUzN7UPPKfOoSKHhIC1Bw Extraction code: 5zhc

WXinlong commented 3 years ago

@jancylee Have you extracted the backbone weights using tools/extract_backbone_weights.py before fine-tuning object detection? If so, please upload the converted model.

jancylee commented 3 years ago

https://pan.baidu.com/s/1M8DOOsc3Yg_loJPxu-75Mw Extraction code: 4zj3

zzzzzz0407 commented 3 years ago

> https://pan.baidu.com/s/1M8DOOsc3Yg_loJPxu-75Mw 4zj3

@jancylee We have tested the model that you trained and there is no problem.

[image: evaluation results]

Please make sure you have followed the instructions in the readme: you have to 1) extract the backbone weights (https://github.com/WXinlong/DenseCL#extracting-backbone-weights) and 2) convert them to detectron2 format (https://github.com/WXinlong/DenseCL/blob/main/benchmarks/detection/README.md).
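For reference, the second step re-wraps the extracted weights into the pickled dict layout detectron2's checkpointer loads. A minimal sketch modeled on the MoCo-style convert script this benchmark derives from — treat the field names and file name here as illustrative, not the exact script:

```python
import pickle

def convert_to_detectron2(backbone_state, out_path):
    """Wrap a backbone state_dict in detectron2's pickled checkpoint layout."""
    blob = {
        "model": backbone_state,
        "__author__": "converted",    # free-form provenance tag
        "matching_heuristics": True,  # lets detectron2 fuzzy-match key names
    }
    with open(out_path, "wb") as f:
        pickle.dump(blob, f)

# Illustrative round trip with a dummy state_dict and a hypothetical file name.
convert_to_detectron2({"conv1.weight": b"\x00"}, "densecl_r50_d2.pkl")
with open("densecl_r50_d2.pkl", "rb") as f:
    reloaded = pickle.load(f)
```

Skipping this wrapping (or the extraction step before it) means detectron2 cannot match the checkpoint keys to the detector's backbone, which reproduces exactly the kind of AP gap discussed in this issue.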

jancylee commented 3 years ago

Thank you very much. It was the "extract the backbone weights" step that fixed it.