open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

loss nan when finetune on Faster R-CNN #3380

Closed ShidiDaisy closed 4 years ago

ShidiDaisy commented 4 years ago

I'm trying to finetune faster_rcnn_r50_fpn_1x_voc0712 on a custom dataset that has the same format as VOC. When I point load_from at the pretrained Faster R-CNN weights (https://github.com/open-mmlab/mmdetection/tree/master/configs/pascal_voc), the loss becomes nan. If I remove the pretrained model from load_from, training runs without any issue.

Here are the details of my settings and the error:

2020-07-21 08:50:21,652 - mmdet - INFO - Environment info:

sys.platform: linux
Python: 3.7.7 (default, May 7 2020, 21:25:33) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda-10.1
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: Tesla K80
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.5.1
PyTorch compiling details: PyTorch built with:
TorchVision: 0.6.0a0+35d732a
OpenCV: 4.2.0
MMCV: 1.0.2
MMDetection: 2.3.0rc0+d613f21
MMDetection Compiler: GCC 7.5
MMDetection CUDA Compiler: 10.1

2020-07-21 08:50:21,652 - mmdet - INFO - Distributed training: False
2020-07-21 08:50:21,999 - mmdet - INFO - Config:

model = dict( type='FasterRCNN', pretrained='torchvision://resnet50', backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch'), neck=dict( type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5), rpn_head=dict( type='RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict( type='AnchorGenerator', scales=[8], ratios=[0.5, 1.0, 2.0], strides=[4, 8, 16, 32, 64]), bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[1.0, 1.0, 1.0, 1.0]), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict(type='GIoULoss', loss_weight=1.0)), roi_head=dict( type='StandardRoIHead', bbox_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]), bbox_head=dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=20, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=False, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='GIoULoss', loss_weight=1.0))))
train_cfg = dict( rpn=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, match_low_quality=True, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=-1, pos_weight=-1, debug=False), rpn_proposal=dict( nms_across_levels=False, nms_pre=2000, nms_post=1000, max_num=1000, nms_thr=0.7, min_bbox_size=0), rcnn=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False))
test_cfg = dict( rpn=dict( nms_across_levels=False, nms_pre=1000, nms_post=1000, max_num=1000, nms_thr=0.7, min_bbox_size=0), rcnn=dict( score_thr=0.05, nms=dict(type='nms', iou_threshold=0.5), max_per_img=100))
dataset_type = 'VOCDataset'
data_root = 'data/AscentData/'
img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', img_scale=(1000, 600), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ]
test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1000, 600), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict( type='RepeatDataset', times=3, dataset=dict( type='VOCDataset', ann_file='data/AscentData/VOC2012/ImageSets/Main/trainval.txt', img_prefix='data/AscentData/VOC2012/', pipeline=[ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', img_scale=(1000, 600), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ])),
    val=dict( type='VOCDataset', ann_file='data/AscentData/VOC2012/ImageSets/Main/test.txt', img_prefix='data/AscentData/VOC2012/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1000, 600), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]),
    test=dict( type='VOCDataset', ann_file='data/AscentData/VOC2012/ImageSets/Main/test.txt', img_prefix='data/AscentData/VOC2012/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1000, 600), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]))
evaluation = dict(interval=1, metric='mAP')
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'checkpoints/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth'
resume_from = None
workflow = [('train', 1)]
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[3])
total_epochs = 1
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x_voc0712'
gpu_ids = range(0, 1)
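One detail worth flagging in the config above: optimizer lr=0.01 is the value the MMDetection defaults assume for the usual 8 GPUs x 2 images setup (total batch size 16), while gpu_ids = range(0, 1) and samples_per_gpu=2 give an effective batch size of 2 here, and optimizer_config has grad_clip=None. Below is a minimal sketch of the adjustments the linear scaling rule would suggest; the exact numbers are assumptions for a single-GPU run, not values taken from this issue.

```python
# Sketch only: scale the learning rate to the actual batch size and enable
# gradient clipping. Assumes 1 GPU x 2 images (batch size 2) versus the
# 8 x 2 = 16 that lr=0.01 was tuned for; adjust to your own setup.
optimizer = dict(type='SGD', lr=0.01 * 2 / 16, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```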

2020-07-21 08:50:22,420 - mmdet - INFO - load model from: torchvision://resnet50
2020-07-21 08:50:22,659 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2020-07-21 08:50:24,813 - mmdet - INFO - load checkpoint from checkpoints/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth
2020-07-21 08:50:24,967 - mmdet - INFO - Start running, host: ubuntu@ip-10-0-0-163, work_dir: /home/ubuntu/mmdetection/work_dirs/faster_rcnn_r50_fpn_1x_voc0712
2020-07-21 08:50:24,967 - mmdet - INFO - workflow: [('train', 1)], max: 1 epochs
2020-07-21 08:51:03,435 - mmdet - INFO - Epoch [1][50/3522] lr: 1.000e-02, eta: 0:44:17, time: 0.765, data_time: 0.047, memory: 1991, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 1.9453, loss_bbox: nan, loss: nan
2020-07-21 08:51:39,474 - mmdet - INFO - Epoch [1][100/3522] lr: 1.000e-02, eta: 0:42:22, time: 0.721, data_time: 0.008, memory: 1991, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 0.0000, loss_bbox: nan, loss: nan
2020-07-21 08:52:15,775 - mmdet - INFO - Epoch [1][150/3522] lr: 1.000e-02, eta: 0:41:26, time: 0.726, data_time: 0.008, memory: 1991, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 0.0000, loss_bbox: nan, loss: nan
2020-07-21 08:52:52,069 - mmdet - INFO - Epoch [1][200/3522] lr: 1.000e-02, eta: 0:40:39, time: 0.726, data_time: 0.009, memory: 1991, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 0.0000, loss_bbox: nan, loss: nan
2020-07-21 08:53:28,552 - mmdet - INFO - Epoch [1][250/3522] lr: 1.000e-02, eta: 0:40:00, time: 0.730, data_time: 0.008, memory: 1991, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 0.0000, loss_bbox: nan, loss: nan
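Since the losses are already nan at the first logged iteration, and only when load_from is set, it may be worth ruling out a broken or mismatched checkpoint before touching hyperparameters. A minimal diagnostic sketch; the state-dict key name assumes the standard MMDetection 2.x Faster R-CNN layout.

```python
import torch

# Load the checkpoint referenced by load_from in the config above.
ckpt = torch.load(
    'checkpoints/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth',
    map_location='cpu')
state = ckpt.get('state_dict', ckpt)

# 1) Are any weights in the checkpoint itself non-finite?
bad = [k for k, v in state.items()
       if torch.is_tensor(v) and v.is_floating_point()
       and not torch.isfinite(v).all()]
print('non-finite tensors:', bad or 'none')

# 2) Does the classification head match num_classes=20
#    (20 foreground classes + background = 21 outputs)?
fc_cls = state.get('roi_head.bbox_head.fc_cls.weight')
if fc_cls is not None:
    print('fc_cls output dim:', fc_cls.shape[0])
```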

Does anyone have any idea about this issue?

hellock commented 4 years ago

Please refer to the documentation. [documentation screenshot attached in the original issue]
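The screenshot is not reproduced here; the MMDetection documentation on NaN losses generally suggests checking that the annotations are valid, lowering the learning rate, lengthening the warmup, and enabling gradient clipping. As a complement to the optimizer sketch earlier, here is a hedged sketch of a longer linear warmup; warmup_iters=1000 and warmup_ratio=0.001 are illustrative assumptions, not values from the issue or the screenshot.

```python
# Sketch: run the first iterations after loading the checkpoint at a much
# smaller learning rate via linear warmup. Values are illustrative only.
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=0.001,
    step=[3])
```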