Closed HongQWang closed 4 years ago
Hello, I'm sorry, I have no plan to upload log files.
I cannot judge whether it is a hyperparameter issue, an environment setup issue, or a bug. Which config is your "default"? Which dataset? Is it publicly available? Do the losses decrease?
> Hello, I'm sorry, I have no plan to upload log files.
>
> I cannot judge whether it is a hyperparameter issue, an environment setup issue, or a bug. Which config is your "default"? Which dataset? Is it publicly available? Do the losses decrease?
Thanks for your reply!

Config:
`universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco.py`
and
`universenet101_gfl_fp16_4x4_mstrain_480_960_2x_coco.py`
I used both config files and got the same result.

Dataset:
my custom dataset; it is available here: http://challenge.xfyun.cn/topic/info?type=Xray

Loss:
while the code is running, loss_cls increases while loss_bbox and loss_dfl decrease.
Should I adjust the weight of loss_cls?
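For reference, in UniverseNet's GFL-style configs the classification-loss weight lives under `bbox_head.loss_cls`. A minimal sketch of where it would be adjusted (field names as in the config dumped later in this thread; `loss_weight=1.0` is the default):

```python
# Sketch: where the loss_cls weight sits in a GFL-style mmdetection config.
# Lowering/raising loss_weight rebalances loss_cls against loss_bbox/loss_dfl.
bbox_head = dict(
    type='GFLSEPCHead',
    loss_cls=dict(
        type='QualityFocalLoss',
        use_sigmoid=True,
        beta=2.0,
        loss_weight=1.0),  # adjust this to reweight the classification loss
    loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
    loss_bbox=dict(type='GIoULoss', loss_weight=2.0))
```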
> Hello, I'm sorry, I have no plan to upload log files.
>
> I cannot judge whether it is a hyperparameter issue, an environment setup issue, or a bug. Which config is your "default"? Which dataset? Is it publicly available? Do the losses decrease?
My log:
```
2020-08-16 02:36:45,412 - mmdet - INFO - Start running, host: jovyan@pengbo-featurize, work_dir: /home/jovyan/work/UniverseNet-2/work_dirs/universenet101_gfl_fp16_4x4_mstrain_480_960_2x_coco
2020-08-16 02:36:45,413 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
2020-08-16 02:39:04,790 - mmdet - INFO - Epoch [1][50/721]  lr: 1.978e-03, eta: 13:19:55, time: 2.782, data_time: 0.066, memory: 8486, loss_cls: 0.4804, loss_bbox: 1.2206, loss_dfl: 0.6125, loss: 2.3134
2020-08-16 02:41:24,773 - mmdet - INFO - Epoch [1][100/721] lr: 3.976e-03, eta: 13:20:10, time: 2.800, data_time: 0.021, memory: 8486, loss_cls: 2.5050, loss_bbox: 1.1613, loss_dfl: 0.6069, loss: 4.2732
2020-08-16 02:43:42,429 - mmdet - INFO - Epoch [1][150/721] lr: 5.974e-03, eta: 13:14:16, time: 2.753, data_time: 0.021, memory: 8486, loss_cls: 4.9992, loss_bbox: 1.1458, loss_dfl: 0.5685, loss: 6.7135
2020-08-16 02:45:58,220 - mmdet - INFO - Epoch [1][200/721] lr: 7.972e-03, eta: 13:07:31, time: 2.716, data_time: 0.021, memory: 8486, loss_cls: 4.7035, loss_bbox: 1.1029, loss_dfl: 0.5529, loss: 6.3594
2020-08-16 02:48:16,862 - mmdet - INFO - Epoch [1][250/721] lr: 9.970e-03, eta: 13:05:47, time: 2.773, data_time: 0.044, memory: 8486, loss_cls: 1.9541, loss_bbox: 1.1172, loss_dfl: 0.5503, loss: 3.6216
2020-08-16 02:50:35,632 - mmdet - INFO - Epoch [1][300/721] lr: 1.197e-02, eta: 13:04:00, time: 2.775, data_time: 0.027, memory: 8486, loss_cls: 0.9678, loss_bbox: 1.1054, loss_dfl: 0.5353, loss: 2.6085
2020-08-16 02:52:51,349 - mmdet - INFO - Epoch [1][350/721] lr: 1.397e-02, eta: 12:59:35, time: 2.714, data_time: 0.021, memory: 8486, loss_cls: 0.6308, loss_bbox: 1.1047, loss_dfl: 0.5351, loss: 2.2707
2020-08-16 02:55:03,844 - mmdet - INFO - Epoch [1][400/721] lr: 1.596e-02, eta: 12:53:27, time: 2.650, data_time: 0.021, memory: 8486, loss_cls: 0.6338, loss_bbox: 1.1209, loss_dfl: 0.5433, loss: 2.2980
2020-08-16 02:57:15,483 - mmdet - INFO - Epoch [1][450/721] lr: 1.796e-02, eta: 12:47:39, time: 2.633, data_time: 0.016, memory: 8486, loss_cls: 0.7886, loss_bbox: 1.0957, loss_dfl: 0.5380, loss: 2.4223
2020-08-16 02:59:17,598 - mmdet - INFO - Epoch [1][500/721] lr: 1.996e-02, eta: 12:37:14, time: 2.442, data_time: 0.015, memory: 8486, loss_cls: 0.5764, loss_bbox: 1.1322, loss_dfl: 0.5417, loss: 2.2503
2020-08-16 03:01:23,953 - mmdet - INFO - Epoch [1][550/721] lr: 2.000e-02, eta: 12:30:29, time: 2.527, data_time: 0.016, memory: 8486, loss_cls: 0.5869, loss_bbox: 1.1098, loss_dfl: 0.5344, loss: 2.2310
2020-08-16 03:03:43,807 - mmdet - INFO - Epoch [1][600/721] lr: 2.000e-02, eta: 12:30:47, time: 2.797, data_time: 0.037, memory: 8486, loss_cls: 0.5879, loss_bbox: 1.0923, loss_dfl: 0.5316, loss: 2.2118
2020-08-16 03:05:55,836 - mmdet - INFO - Epoch [1][650/721] lr: 2.000e-02, eta: 12:27:20, time: 2.641, data_time: 0.022, memory: 8486, loss_cls: 0.5653, loss_bbox: 1.1125, loss_dfl: 0.5354, loss: 2.2132
2020-08-16 03:08:14,097 - mmdet - INFO - Epoch [1][700/721] lr: 2.000e-02, eta: 12:26:32, time: 2.765, data_time: 0.069, memory: 8486, loss_cls: 0.5395, loss_bbox: 1.1534, loss_dfl: 0.5461, loss: 2.2390
2020-08-16 03:09:09,262 - mmdet - INFO - Saving checkpoint at 1 epochs
2020-08-16 03:11:59,823 - mmdet - INFO - Evaluating bbox...
2020-08-16 03:11:59,824 - mmdet - ERROR - The testing results of the whole dataset is empty.
2020-08-16 03:11:59,826 - mmdet - INFO - Epoch [1][721/721] lr: 2.000e-02,
2020-08-16 03:14:15,398 - mmdet - INFO - Epoch [2][50/721]  lr: 2.000e-02, eta: 12:03:18, time: 2.709, data_time: 0.063, memory: 8486, loss_cls: 0.5958, loss_bbox: 1.1072, loss_dfl: 0.5379, loss: 2.2409
2020-08-16 03:16:23,149 - mmdet - INFO - Epoch [2][100/721] lr: 2.000e-02, eta: 11:59:57, time: 2.555, data_time: 0.018, memory: 8486, loss_cls: 0.5905, loss_bbox: 1.0831, loss_dfl: 0.5257, loss: 2.1993
2020-08-16 03:18:31,963 - mmdet - INFO - Epoch [2][150/721] lr: 2.000e-02, eta: 11:57:04, time: 2.576, data_time: 0.015, memory: 8486, loss_cls: 0.5414, loss_bbox: 1.1377, loss_dfl: 0.5413, loss: 2.2204
2020-08-16 03:20:49,284 - mmdet - INFO - Epoch [2][200/721] lr: 2.000e-02, eta: 11:56:47, time: 2.746, data_time: 0.075, memory: 8486, loss_cls: 0.5736, loss_bbox: 1.1272, loss_dfl: 0.5407, loss: 2.2415
2020-08-16 03:23:01,610 - mmdet - INFO - Epoch [2][250/721] lr: 2.000e-02, eta: 11:54:54, time: 2.647, data_time: 0.024, memory: 8486, loss_cls: 0.5640, loss_bbox: 1.1045, loss_dfl: 0.5347, loss: 2.2031
2020-08-16 03:25:15,508 - mmdet - INFO - Epoch [2][300/721] lr: 2.000e-02, eta: 11:53:24, time: 2.678, data_time: 0.022, memory: 8486, loss_cls: 0.5933, loss_bbox: 1.0871, loss_dfl: 0.5301, loss: 2.2104
2020-08-16 03:27:33,835 - mmdet - INFO - Epoch [2][350/721] lr: 2.000e-02, eta: 11:52:57, time: 2.767, data_time: 0.028, memory: 8486, loss_cls: 0.6038, loss_bbox: 1.0547, loss_dfl: 0.5212, loss: 2.1797
2020-08-16 03:29:52,575 - mmdet - INFO - Epoch [2][400/721] lr: 2.000e-02, eta: 11:52:26, time: 2.775, data_time: 0.029, memory: 8486, loss_cls: 0.5547, loss_bbox: 1.1185, loss_dfl: 0.5454, loss: 2.2186
2020-08-16 03:32:05,735 - mmdet - INFO - Epoch [2][450/721] lr: 2.000e-02, eta: 11:50:28, time: 2.663, data_time: 0.021, memory: 8486, loss_cls: 0.5662, loss_bbox: 1.0916, loss_dfl: 0.5326, loss: 2.1904
2020-08-16 03:34:16,526 - mmdet - INFO - Epoch [2][500/721] lr: 2.000e-02, eta: 11:47:59, time: 2.616, data_time: 0.016, memory: 8486, loss_cls: 0.5887, loss_bbox: 1.0942, loss_dfl: 0.5272, loss: 2.2101
2020-08-16 03:36:26,024 - mmdet - INFO - Epoch [2][550/721] lr: 2.000e-02, eta: 11:45:14, time: 2.590, data_time: 0.016, memory: 8486, loss_cls: 0.5729, loss_bbox: 1.1025, loss_dfl: 0.5350, loss: 2.2104
2020-08-16 03:38:37,435 - mmdet - INFO - Epoch [2][600/721] lr: 2.000e-02, eta: 11:42:56, time: 2.628, data_time: 0.016, memory: 8486, loss_cls: 0.5533, loss_bbox: 1.1195, loss_dfl: 0.5363, loss: 2.2090
2020-08-16 03:40:47,550 - mmdet - INFO - Epoch [2][650/721] lr: 2.000e-02, eta: 11:40:23, time: 2.602, data_time: 0.018, memory: 8486, loss_cls: 0.5572, loss_bbox: 1.1066, loss_dfl: 0.5270, loss: 2.1908
2020-08-16 03:43:00,184 - mmdet - INFO - Epoch [2][700/721] lr: 2.000e-02, eta: 11:38:19, time: 2.653, data_time: 0.016, memory: 8486, loss_cls: 0.5712, loss_bbox: 1.0934, loss_dfl: 0.5250, loss: 2.1896
2020-08-16 03:43:54,813 - mmdet - INFO - Saving checkpoint at 2 epochs
2020-08-16 03:46:47,339 - mmdet - INFO - Evaluating bbox...
2020-08-16 03:46:49,316 - mmdet - INFO - Epoch [2][721/721] lr: 2.000e-02, bbox_mAP: 0.0000, bbox_mAP_50: 0.0000, bbox_mAP_75: 0.0000, bbox_mAP_s: 0.0000, bbox_mAP_m: 0.0000, bbox_mAP_l: 0.0000, bbox_mAP_copypaste: 0.000 0.000 0.000 0.000 0.000 0.000
```
The loss_cls should not increase so much during warmup.

You can try:
- `optimizer_config = dict(grad_clip=None)`
- `frozen_stages=-1`

(This is not limited to UniverseNet but might be helpful, as the dataset images are not normal RGB images.)

For reference, a log of `universenet50_gfl_fp16_4x4_mstrain_480_960_1x_coco`:
Epoch [1][50/7330] lr: 4.995e-04, eta: 2 days, 22:00:28, time: 2.867, data_time: 0.876, memory: 5297, loss_cls: 0.1191, loss_bbox: 1.5794, loss_dfl: 0.7078, loss: 2.4063
Epoch [1][100/7330] lr: 9.990e-04, eta: 2 days, 10:51:19, time: 1.956, data_time: 0.014, memory: 5297, loss_cls: 0.1377, loss_bbox: 1.5557, loss_dfl: 0.7010, loss: 2.3945
Epoch [1][150/7330] lr: 1.499e-03, eta: 2 days, 7:32:23, time: 2.008, data_time: 0.014, memory: 5297, loss_cls: 0.5083, loss_bbox: 1.1669, loss_dfl: 0.5600, loss: 2.2352
Epoch [1][200/7330] lr: 1.998e-03, eta: 2 days, 5:54:25, time: 2.014, data_time: 0.014, memory: 5297, loss_cls: 0.4883, loss_bbox: 1.1360, loss_dfl: 0.5292, loss: 2.1534
Epoch [1][250/7330] lr: 2.498e-03, eta: 2 days, 5:03:13, time: 2.042, data_time: 0.014, memory: 5297, loss_cls: 0.5261, loss_bbox: 1.1581, loss_dfl: 0.5410, loss: 2.2253
Epoch [1][300/7330] lr: 2.997e-03, eta: 2 days, 4:33:49, time: 2.064, data_time: 0.015, memory: 5297, loss_cls: 0.5366, loss_bbox: 1.1172, loss_dfl: 0.5302, loss: 2.1841
Epoch [1][350/7330] lr: 3.497e-03, eta: 2 days, 4:03:46, time: 2.023, data_time: 0.013, memory: 5297, loss_cls: 0.5046, loss_bbox: 1.1366, loss_dfl: 0.5229, loss: 2.1642
Epoch [1][400/7330] lr: 3.996e-03, eta: 2 days, 3:49:07, time: 2.069, data_time: 0.014, memory: 5297, loss_cls: 0.5318, loss_bbox: 0.9853, loss_dfl: 0.4835, loss: 2.0005
Epoch [1][450/7330] lr: 4.496e-03, eta: 2 days, 3:37:41, time: 2.071, data_time: 0.014, memory: 5297, loss_cls: 0.5575, loss_bbox: 0.9311, loss_dfl: 0.4663, loss: 1.9550
Epoch [1][500/7330] lr: 4.995e-03, eta: 2 days, 3:28:35, time: 2.074, data_time: 0.014, memory: 5297, loss_cls: 0.5968, loss_bbox: 0.8629, loss_dfl: 0.4443, loss: 1.9040
Epoch [1][550/7330] lr: 5.495e-03, eta: 2 days, 3:21:00, time: 2.075, data_time: 0.014, memory: 5297, loss_cls: 0.6575, loss_bbox: 0.8551, loss_dfl: 0.4443, loss: 1.9568
Epoch [1][600/7330] lr: 5.994e-03, eta: 2 days, 3:14:52, time: 2.079, data_time: 0.014, memory: 5297, loss_cls: 0.6458, loss_bbox: 0.9685, loss_dfl: 0.4850, loss: 2.0994
Epoch [1][650/7330] lr: 6.494e-03, eta: 2 days, 3:13:58, time: 2.120, data_time: 0.014, memory: 5297, loss_cls: 0.6360, loss_bbox: 0.9742, loss_dfl: 0.4748, loss: 2.0851
Epoch [1][700/7330] lr: 6.993e-03, eta: 2 days, 3:09:55, time: 2.090, data_time: 0.014, memory: 5297, loss_cls: 0.6726, loss_bbox: 1.0340, loss_dfl: 0.5035, loss: 2.2101
Epoch [1][750/7330] lr: 7.493e-03, eta: 2 days, 3:07:42, time: 2.106, data_time: 0.015, memory: 5297, loss_cls: 0.6280, loss_bbox: 0.8751, loss_dfl: 0.4532, loss: 1.9563
Epoch [1][800/7330] lr: 7.992e-03, eta: 2 days, 3:07:35, time: 2.129, data_time: 0.014, memory: 5297, loss_cls: 0.6376, loss_bbox: 0.8177, loss_dfl: 0.4237, loss: 1.8790
Epoch [1][850/7330] lr: 8.492e-03, eta: 2 days, 3:06:31, time: 2.120, data_time: 0.014, memory: 5297, loss_cls: 0.6420, loss_bbox: 0.7928, loss_dfl: 0.4158, loss: 1.8506
Epoch [1][900/7330] lr: 8.991e-03, eta: 2 days, 3:06:12, time: 2.130, data_time: 0.014, memory: 5297, loss_cls: 0.6576, loss_bbox: 0.7603, loss_dfl: 0.4021, loss: 1.8201
Epoch [1][950/7330] lr: 9.491e-03, eta: 2 days, 3:05:50, time: 2.132, data_time: 0.014, memory: 5297, loss_cls: 0.6863, loss_bbox: 0.7815, loss_dfl: 0.4104, loss: 1.8781
Epoch [1][1000/7330] lr: 9.990e-03, eta: 2 days, 3:05:37, time: 2.135, data_time: 0.014, memory: 5297, loss_cls: 0.7394, loss_bbox: 0.8207, loss_dfl: 0.4228, loss: 1.9830
Epoch [1][1050/7330] lr: 1.000e-02, eta: 2 days, 3:05:49, time: 2.143, data_time: 0.015, memory: 5297, loss_cls: 0.7372, loss_bbox: 0.7518, loss_dfl: 0.3984, loss: 1.8874
Epoch [1][1100/7330] lr: 1.000e-02, eta: 2 days, 3:04:36, time: 2.125, data_time: 0.014, memory: 5297, loss_cls: 0.7035, loss_bbox: 0.7030, loss_dfl: 0.3794, loss: 1.7859
Epoch [1][1150/7330] lr: 1.000e-02, eta: 2 days, 3:04:05, time: 2.137, data_time: 0.014, memory: 5297, loss_cls: 0.7069, loss_bbox: 0.6950, loss_dfl: 0.3751, loss: 1.7770
Epoch [1][1200/7330] lr: 1.000e-02, eta: 2 days, 3:01:46, time: 2.109, data_time: 0.014, memory: 5297, loss_cls: 0.6898, loss_bbox: 0.6882, loss_dfl: 0.3652, loss: 1.7432
Epoch [1][1250/7330] lr: 1.000e-02, eta: 2 days, 3:01:41, time: 2.146, data_time: 0.014, memory: 5297, loss_cls: 0.7046, loss_bbox: 0.6602, loss_dfl: 0.3585, loss: 1.7233
Epoch [1][1300/7330] lr: 1.000e-02, eta: 2 days, 3:01:25, time: 2.146, data_time: 0.014, memory: 5297, loss_cls: 0.7017, loss_bbox: 0.6467, loss_dfl: 0.3517, loss: 1.7002
Epoch [1][1350/7330] lr: 1.000e-02, eta: 2 days, 3:00:48, time: 2.141, data_time: 0.014, memory: 5297, loss_cls: 0.6947, loss_bbox: 0.6551, loss_dfl: 0.3569, loss: 1.7067
Epoch [1][1400/7330] lr: 1.000e-02, eta: 2 days, 2:59:48, time: 2.135, data_time: 0.015, memory: 5297, loss_cls: 0.6934, loss_bbox: 0.6324, loss_dfl: 0.3460, loss: 1.6717
Epoch [1][1450/7330] lr: 1.000e-02, eta: 2 days, 3:00:48, time: 2.177, data_time: 0.014, memory: 5297, loss_cls: 0.6893, loss_bbox: 0.6219, loss_dfl: 0.3487, loss: 1.6599
Epoch [1][1500/7330] lr: 1.000e-02, eta: 2 days, 2:59:29, time: 2.132, data_time: 0.014, memory: 5297, loss_cls: 0.7191, loss_bbox: 0.5938, loss_dfl: 0.3368, loss: 1.6498
...
Epoch [1][7300/7330] lr: 1.000e-02, eta: 2 days, 0:36:21, time: 2.201, data_time: 0.014, memory: 5297, loss_cls: 0.5064, loss_bbox: 0.4261, loss_dfl: 0.2592, loss: 1.1917
Saving checkpoint at 1 epochs
Evaluating bbox...
Epoch [1][7330/7330] lr: 1.000e-02, bbox_mAP: 0.2330, bbox_mAP_50: 0.3600, bbox_mAP_75: 0.2510, bbox_mAP_s: 0.1150, bbox_mAP_m: 0.2550, bbox_mAP_l: 0.3200, bbox_mAP_copypaste: 0.233 0.360 0.251 0.115 0.255 0.320
The loss_cls should not increase so much during warmup.

You can try:
- `optimizer_config = dict(grad_clip=None)`
- a lower learning rate
- a longer warmup
- `frozen_stages=-1`

(This is not limited to UniverseNet but might be helpful, as the dataset images are not normal RGB images.)
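In mmdetection config terms, these suggestions map onto fields like the following (a sketch; the `lr` and `warmup_iters` values are illustrative, not values recommended in this thread):

```python
# Sketch of the suggested knobs in an mmdetection config.
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)  # lower lr (example value)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=1000,   # longer warmup (example value; configs here use 500)
    warmup_ratio=0.001,
    step=[16, 22])
# And in the backbone config, train all stages instead of freezing the first:
# backbone=dict(..., frozen_stages=-1, ...)
```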
I have tried your suggestions, but it didn't work either. My log:
```
sys.platform: linux
Python: 3.7.4 (default, Jun 20 2020, 13:06:05) [GCC 7.5.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: GeForce GTX 1080 Ti
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
```
```python
model = dict(
    type='GFL',
    pretrained='./weights/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco_20200729_epoch_24-c9308e66.pth',
    backbone=dict(
        type='Res2Net',
        depth=50,
        scales=4,
        base_width=26,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=-1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    neck=[
        dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],
            out_channels=256,
            start_level=1,
            add_extra_convs='on_output',
            num_outs=5),
        dict(
            type='SEPC',
            out_channels=256,
            stacked_convs=4,
            pconv_deform=True,
            lcconv_deform=True,
            ibn=False,
            lcconv_padding=1)
    ],
    bbox_head=dict(
        type='GFLSEPCHead',
        num_classes=10,
        in_channels=256,
        stacked_convs=0,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            ratios=[1.0],
            octave_base_scale=8,
            scales_per_octave=1,
            strides=[8, 16, 32, 64, 128]),
        loss_cls=dict(
            type='QualityFocalLoss',
            use_sigmoid=True,
            beta=2.0,
            loss_weight=1.0),
        loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
        reg_max=16,
        loss_bbox=dict(type='GIoULoss', loss_weight=2.0)))
train_cfg = dict(
    assigner=dict(type='ATSSAssigner', topk=9),
    allowed_border=-1,
    pos_weight=-1,
    debug=False)
test_cfg = dict(
    nms_pre=1000,
    min_bbox_size=0,
    score_thr=0.05,
    nms=dict(type='nms', iou_threshold=0.6),
    max_per_img=100)
dataset_type = 'CocoDataset'
data_root = '../../data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize',
        img_scale=[(1333, 480), (1333, 960)],
        multiscale_mode='range',
        keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        ann_file='../../data/coco/annotations/train.json',
        img_prefix='../../data/coco/train/',
        pipeline=[ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict( type='Resize', img_scale=[(1333, 480), (1333, 960)], multiscale_mode='range', keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ]),
    val=dict(
        type='CocoDataset',
        ann_file='../../data/coco/annotations/val.json',
        img_prefix='../../data/coco/val/',
        pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1333, 800), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]),
    test=dict(
        type='CocoDataset',
        ann_file='../../data/coco/annotations/val.json',
        pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1333, 800), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[16, 22])
total_epochs = 24
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
fp16 = dict(loss_scale=512.0)
work_dir = './work_dirs/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco'
gpu_ids = range(0, 1)
```
```
2020-08-16 11:56:17,334 - mmdet - INFO - Start running, host: jovyan@pengbo-featurize, work_dir: /home/jovyan/work/UniverseNet-2/work_dirs/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco
2020-08-16 11:56:17,334 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
2020-08-16 11:57:49,338 - mmdet - INFO - Epoch [1][50/721]  lr: 2.473e-04, eta: 8:47:57, time: 1.836, data_time: 0.060, memory: 7890, loss_cls: 0.1291, loss_bbox: 1.5079, loss_dfl: 0.7083, loss: 2.3452
2020-08-16 11:59:24,046 - mmdet - INFO - Epoch [1][100/721] lr: 4.970e-04, eta: 8:54:46, time: 1.894, data_time: 0.037, memory: 7890, loss_cls: 0.1321, loss_bbox: 1.4898, loss_dfl: 0.7079, loss: 2.3298
2020-08-16 12:00:59,984 - mmdet - INFO - Epoch [1][150/721] lr: 7.468e-04, eta: 8:58:20, time: 1.919, data_time: 0.082, memory: 7890, loss_cls: 0.1261, loss_bbox: 1.5138, loss_dfl: 0.7072, loss: 2.3471
2020-08-16 12:02:39,264 - mmdet - INFO - Epoch [1][200/721] lr: 9.965e-04, eta: 9:04:04, time: 1.986, data_time: 0.084, memory: 7890, loss_cls: 0.1357, loss_bbox: 1.4910, loss_dfl: 0.7061, loss: 2.3328
2020-08-16 12:04:14,130 - mmdet - INFO - Epoch [1][250/721] lr: 1.246e-03, eta: 9:01:50, time: 1.897, data_time: 0.023, memory: 7890, loss_cls: 0.1385, loss_bbox: 1.4924, loss_dfl: 0.7046, loss: 2.3355
2020-08-16 12:05:48,715 - mmdet - INFO - Epoch [1][300/721] lr: 1.496e-03, eta: 8:59:33, time: 1.892, data_time: 0.072, memory: 7890, loss_cls: 0.1477, loss_bbox: 1.4845, loss_dfl: 0.7027, loss: 2.3350
2020-08-16 12:07:29,486 - mmdet - INFO - Epoch [1][350/721] lr: 1.746e-03, eta: 9:02:28, time: 2.015, data_time: 0.104, memory: 7890, loss_cls: 0.1485, loss_bbox: 1.4890, loss_dfl: 0.7002, loss: 2.3377
2020-08-16 12:09:05,825 - mmdet - INFO - Epoch [1][400/721] lr: 1.996e-03, eta: 9:01:07, time: 1.927, data_time: 0.119, memory: 7890, loss_cls: 0.1447, loss_bbox: 1.4838, loss_dfl: 0.6964, loss: 2.3249
2020-08-16 12:10:38,035 - mmdet - INFO - Epoch [1][450/721] lr: 2.245e-03, eta: 8:57:08, time: 1.844, data_time: 0.029, memory: 7890, loss_cls: 0.1629, loss_bbox: 1.4842, loss_dfl: 0.6906, loss: 2.3377
2020-08-16 12:12:10,660 - mmdet - INFO - Epoch [1][500/721] lr: 2.495e-03, eta: 8:53:52, time: 1.853, data_time: 0.027, memory: 7890, loss_cls: 0.2882, loss_bbox: 1.3730, loss_dfl: 0.6552, loss: 2.3163
2020-08-16 12:13:46,277 - mmdet - INFO - Epoch [1][550/721] lr: 2.500e-03, eta: 8:52:26, time: 1.912, data_time: 0.088, memory: 7890, loss_cls: 0.6021, loss_bbox: 1.1051, loss_dfl: 0.5439, loss: 2.2511
2020-08-16 12:15:19,420 - mmdet - INFO - Epoch [1][600/721] lr: 2.500e-03, eta: 8:49:49, time: 1.863, data_time: 0.050, memory: 7890, loss_cls: 0.5690, loss_bbox: 1.1192, loss_dfl: 0.5359, loss: 2.2240
2020-08-16 12:16:56,822 - mmdet - INFO - Epoch [1][650/721] lr: 2.500e-03, eta: 8:49:12, time: 1.948, data_time: 0.036, memory: 7890, loss_cls: 0.5956, loss_bbox: 1.1069, loss_dfl: 0.5336, loss: 2.2360
2020-08-16 12:18:30,411 - mmdet - INFO - Epoch [1][700/721] lr: 2.500e-03, eta: 8:46:55, time: 1.872, data_time: 0.031, memory: 7890, loss_cls: 0.5476, loss_bbox: 1.1040, loss_dfl: 0.5268, loss: 2.1784
2020-08-16 12:19:10,323 - mmdet - INFO - Saving checkpoint at 1 epochs
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 720/720, 6.4 task/s, elapsed: 112s, ETA: 0s
2020-08-16 12:21:12,590 - mmdet - INFO - Evaluating bbox...
Loading and preparing results...
DONE (t=0.77s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=6.33s).
Accumulating evaluation results...
DONE (t=3.35s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.001
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.009
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300  ] = 0.009
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.009
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.003
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.027
2020-08-16 12:21:23,175 - mmdet - INFO - Epoch [1][721/721] lr: 2.500e-03, bbox_mAP: 0.0000, bbox_mAP_50: 0.0010, bbox_mAP_75: 0.0000, bbox_mAP_s: 0.0000, bbox_mAP_m: 0.0000, bbox_mAP_l: 0.0000, bbox_mAP_copypaste: 0.000 0.001 0.000 0.000 0.000 0.000
```
Is the network sensitive to the dataset?
Then I set samples_per_gpu=1 and workers_per_gpu=1, and while training I got an error: "input image is smaller than kernel". Why does this problem appear during training?
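Some rough, illustrative arithmetic (not a diagnosis of this exact error): the head runs on FPN levels with strides 8 through 128, so a very small image produces tiny feature maps at the coarse levels, where a kernel can end up larger than its input. With `samples_per_gpu > 1`, small images get padded up to the largest image in the batch, which can hide the problem; with batch size 1 it surfaces.

```python
def fpn_sizes(h, w, strides=(8, 16, 32, 64, 128), size_divisor=32):
    """Per-level (H, W) feature-map sizes after padding H and W up to a
    multiple of size_divisor (as the Pad transform in the config does)."""
    pad_h = (h + size_divisor - 1) // size_divisor * size_divisor
    pad_w = (w + size_divisor - 1) // size_divisor * size_divisor
    # -(-x // s) is ceiling division
    return [(-(-pad_h // s), -(-pad_w // s)) for s in strides]

# A typical training size keeps every level comfortably large:
print(fpn_sizes(480, 960))   # [(60, 120), (30, 60), (15, 30), (8, 15), (4, 8)]
# A tiny image collapses the stride-128 level to a single cell:
print(fpn_sizes(100, 100))   # [(16, 16), (8, 8), (4, 4), (2, 2), (1, 1)]
```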
```python
pretrained='./weights/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco_20200729_epoch_24-c9308e66.pth',
```

This setting is wrong. In mmdetection, `pretrained` is used for backbones. Please use `load_from` instead.
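In config terms, the fix looks roughly like this (a sketch; the checkpoint path is the one quoted above):

```python
# Wrong: `pretrained` loads backbone weights only, so pointing it at a
# full-detector checkpoint is incorrect.
#   model = dict(
#       type='GFL',
#       pretrained='./weights/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco_20200729_epoch_24-c9308e66.pth',
#       ...)

# Right: `load_from` initializes the whole detector from the checkpoint.
load_from = './weights/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco_20200729_epoch_24-c9308e66.pth'
```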
> `pretrained='./weights/universenet50_gfl_fp16_4x4_mstrain_480_960_2x_coco_20200729_epoch_24-c9308e66.pth',`
>
> This setting is wrong. In mmdetection, `pretrained` is used for backbones. Please use `load_from` instead.
Thank you very much for your patient answers! It works well now.
Hello, the config file I am using is the default one, and the pretrained weights are the COCO-pretrained weights you shared. I set num_classes=10 and modified coco.py accordingly, but I get a very poor result: AP is zero or just 0.001. I don't know if I did something wrong. (PS: I have tried the same hyperparameter settings with another object detection network in mmdetection, and it works well.)
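For a 10-class custom dataset, both the head and the dataset class list have to match; a hedged sketch of the relevant config fields (the class names are placeholders, and newer mmdetection versions let the config override the class list instead of editing `mmdet/datasets/coco.py`):

```python
# num_classes must match the number of categories in the annotations (10 here).
model = dict(bbox_head=dict(num_classes=10))

# Override the dataset class list from the config (placeholder names below;
# use the category names from your annotation file, in the same order).
classes = ('cls1', 'cls2', 'cls3', 'cls4', 'cls5',
           'cls6', 'cls7', 'cls8', 'cls9', 'cls10')
data = dict(
    train=dict(classes=classes),
    val=dict(classes=classes),
    test=dict(classes=classes))
```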