aim-uofa / AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
https://git.io/AdelaiDet

Training ABCNet: loss became NaN ("Loss became infinite or NaN at iteration=39!") #238

Closed dagongji10 closed 3 years ago

dagongji10 commented 3 years ago

I use ABCNet to train on my own dataset. A dataset sample looks like this: [sample image omitted]. I used abcnet_custom_dataset_example_v2 to annotate it and checked that the format is correct. The config is as follows:

_BASE_: "Base-CTW1500.yaml"
MODEL:
  WEIGHTS: "weights/batext/pretrain_attn_R_50.pth"
  RESNETS:
    DEPTH: 50
  BATEXT:
    RECOGNIZER: "attn" # "attn" "rnn"
SOLVER:
  IMS_PER_BATCH: 1
  BASE_LR: 0.0001
  STEPS: (80000,)
  MAX_ITER: 120000
  CHECKPOINT_PERIOD: 10000
  WARMUP_ITERS: 1000
#TEST:
#  EVAL_PERIOD: 100
OUTPUT_DIR: "output/batext/ctw1500/attn_R_50"

[11/22 08:32:05 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
  ASPECT_RATIO_GROUPING: True
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: ()
  PROPOSAL_FILES_TRAIN: ()
  TEST: ('my_page_dataset_test',)
  TRAIN: ('my_page_dataset_train',)
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    CROP_INSTANCE: False
    ENABLED: True
    SIZE: [0.1, 0.1]
    TYPE: relative_range
  FORMAT: BGR
  HFLIP_TRAIN: False
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1024
  MAX_SIZE_TRAIN: 1600
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800, 832, 864, 896)
  MIN_SIZE_TRAIN_SAMPLING: choice
MODEL:
  ANCHOR_GENERATOR:
    ANGLES: [[-90, 0, 90]]
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES: [[32, 64, 128, 256, 512]]
  BACKBONE:
    ANTI_ALIAS: False
    FREEZE_AT: 2
    NAME: build_fcos_resnet_fpn_backbone
  BASIS_MODULE:
    ANN_SET: coco
    COMMON_STRIDE: 8
    CONVS_DIM: 128
    IN_FEATURES: ['p3', 'p4', 'p5']
    LOSS_ON: False
    LOSS_WEIGHT: 0.3
    NAME: ProtoNet
    NORM: SyncBN
    NUM_BASES: 4
    NUM_CLASSES: 80
    NUM_CONVS: 3
  BATEXT:
    CANONICAL_SIZE: 96
    CONV_DIM: 256
    IN_FEATURES: ['p2', 'p3', 'p4']
    NUM_CHARS: 30
    NUM_CONV: 2
    POOLER_RESOLUTION: (8, 128)
    POOLER_SCALES: (0.25, 0.125, 0.0625)
    RECOGNITION_LOSS: ctc
    RECOGNIZER: attn
    SAMPLING_RATIO: 1
    VOC_SIZE: 84
  BLENDMASK:
    ATTN_SIZE: 14
    BOTTOM_RESOLUTION: 56
    INSTANCE_LOSS_WEIGHT: 1.0
    POOLER_SAMPLING_RATIO: 1
    POOLER_SCALES: (0.25,)
    POOLER_TYPE: ROIAlignV2
    TOP_INTERP: bilinear
    VISUALIZE: False
  BiFPN:
    IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
    NORM:
    NUM_REPEATS: 6
    OUT_CHANNELS: 160
  CONDINST:
    MASK_BRANCH:
      CHANNELS: 128
      IN_FEATURES: ['p3', 'p4', 'p5']
      NORM: BN
      NUM_CONVS: 4
      OUT_CHANNELS: 8
      SEMANTIC_LOSS_ON: False
    MASK_HEAD:
      CHANNELS: 8
      DISABLE_REL_COORDS: False
      NUM_LAYERS: 3
      USE_FP16: False
    MASK_OUT_STRIDE: 4
    MAX_PROPOSALS: -1
  DEVICE: cuda
  DLA:
    CONV_BODY: DLA34
    NORM: FrozenBN
    OUT_FEATURES: ['stage2', 'stage3', 'stage4', 'stage5']
  FCOS:
    CENTER_SAMPLE: True
    FPN_STRIDES: [8, 16, 32, 64, 128]
    INFERENCE_TH_TEST: 0.6
    INFERENCE_TH_TRAIN: 0.05
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    LOC_LOSS_TYPE: giou
    LOSS_ALPHA: 0.25
    LOSS_GAMMA: 2.0
    NMS_TH: 0.5
    NORM: GN
    NUM_BOX_CONVS: 4
    NUM_CLASSES: 1
    NUM_CLS_CONVS: 4
    NUM_SHARE_CONVS: 0
    POST_NMS_TOPK_TEST: 100
    POST_NMS_TOPK_TRAIN: 100
    POS_RADIUS: 1.5
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 1000
    PRIOR_PROB: 0.01
    SIZES_OF_INTEREST: [64, 128, 256, 512]
    THRESH_WITH_CTR: False
    TOP_LEVELS: 2
    USE_DEFORMABLE: False
    USE_RELU: True
    USE_SCALE: False
    YIELD_PROPOSAL: False
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
    NORM:
    OUT_CHANNELS: 256
  KEYPOINT_ON: False
  LOAD_PROPOSALS: False
  MASK_ON: False
  MEInst:
    AGNOSTIC: True
    CENTER_SAMPLE: True
    DIM_MASK: 60
    FLAG_PARAMETERS: False
    FPN_STRIDES: [8, 16, 32, 64, 128]
    GCN_KERNEL_SIZE: 9
    INFERENCE_TH_TEST: 0.05
    INFERENCE_TH_TRAIN: 0.05
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    LAST_DEFORMABLE: False
    LOC_LOSS_TYPE: giou
    LOSS_ALPHA: 0.25
    LOSS_GAMMA: 2.0
    LOSS_ON_MASK: False
    MASK_LOSS_TYPE: mse
    MASK_ON: True
    MASK_SIZE: 28
    NMS_TH: 0.6
    NORM: GN
    NUM_BOX_CONVS: 4
    NUM_CLASSES: 80
    NUM_CLS_CONVS: 4
    NUM_MASK_CONVS: 4
    NUM_SHARE_CONVS: 0
    PATH_COMPONENTS: datasets/coco/components/coco_2017_train_class_agnosticTrue_whitenTrue_sigmoidTrue_60.npz
    POST_NMS_TOPK_TEST: 100
    POST_NMS_TOPK_TRAIN: 100
    POS_RADIUS: 1.5
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 1000
    PRIOR_PROB: 0.01
    SIGMOID: True
    SIZES_OF_INTEREST: [64, 128, 256, 512]
    THRESH_WITH_CTR: False
    TOP_LEVELS: 2
    TYPE_DEFORMABLE: DCNv1
    USE_DEFORMABLE: False
    USE_GCN_IN_MASK: False
    USE_RELU: True
    USE_SCALE: True
    WHITEN: True
  META_ARCHITECTURE: OneStageRCNN
  MOBILENET: False
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: True
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN: [103.53, 116.28, 123.675]
  PIXEL_STD: [1.0, 1.0, 1.0]
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: BAText
  RESNETS:
    DEFORM_INTERVAL: 1
    DEFORM_MODULATED: False
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE: [False, False, False, False]
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: True
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.4, 0.5]
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
    IOUS: (0.5, 0.6, 0.7)
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
    CLS_AGNOSTIC_BBOX_REG: False
    CONV_DIM: 256
    FC_DIM: 1024
    NAME:
    NORM:
    NUM_CONV: 0
    NUM_FC: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: False
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES: ['res4']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    NAME: TextHead
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 80
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: True
    SCORE_THRESH_TEST: 0.05
  ROI_KEYPOINT_HEAD:
    CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: False
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM:
    NUM_CONV: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    BOUNDARY_THRESH: -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES: ['res4']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.3, 0.7]
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  TOP_MODULE:
    DIM: 16
    NAME: conv
  VOVNET:
    BACKBONE_OUT_CHANNELS: 256
    CONV_BODY: V-39-eSE
    NORM: FrozenBN
    OUT_CHANNELS: 256
    OUT_FEATURES: ['stage2', 'stage3', 'stage4', 'stage5']
  WEIGHTS: demo_models/ctw1500_attn_R_50.pth
OUTPUT_DIR: output/batext/ctw1500/attn_R_50
SEED: -1
SOLVER:
  BASE_LR: 0.0001
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 10000
  CLIP_GRADIENTS:
    CLIP_TYPE: value
    CLIP_VALUE: 1.0
    ENABLED: True
    NORM_TYPE: 2.0
  GAMMA: 0.1
  IMS_PER_BATCH: 1
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 120000
  MOMENTUM: 0.9
  NESTEROV: False
  REFERENCE_WORLD_SIZE: 0
  STEPS: (80000,)
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 1000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: False
    FLIP: True
    MAX_SIZE: 4000
    MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 0
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: False
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0

I also tried changing LOSS_WEIGHT to 0.5 to 0.8 and BASE_LR to 0.00001, but I still get the same problem:

[11/22 08:32:18 d2.utils.events]:  eta: 7:22:19  iter: 19  total_loss: 10.594  rec_loss: 5.713  loss_fcos_cls: 1.198  los: 0.690  loss_fcos_bezier: 1.983  time: 0.2649  data_time: 0.0257  lr: 0.000002  max_mem: 2018M
Traceback (most recent call last):
  File "tools/train_net.py", line 237, in <module>
    launch(
  File "/root/anaconda3/lib/python3.8/site-packages/detectron2/engine/launch.py", line 62, in launch
    main_func(*args)
  File "tools/train_net.py", line 231, in main
    return trainer.train()
  File "tools/train_net.py", line 113, in train
    self.train_loop(self.start_iter, self.max_iter)
  File "tools/train_net.py", line 102, in train_loop
    self.run_step()
  File "/root/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 243, in run_step
    self._detect_anomaly(losses, loss_dict)
  File "/root/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 254, in _detect_anomaly
    raise FloatingPointError(
FloatingPointError: Loss became infinite or NaN at iteration=39!
loss_dict = {'rec_loss': tensor(nan, device='cuda:0', grad_fn=<MulBackward0>), 'loss_fcos_cls': tensor(1.0985, device='cuda:0', ...), 'loss_fcos_loc': tensor(0.9550, device='cuda:0', grad_fn=<DivBackward0>), 'loss_fcos_ctr': tensor(0.6931, device='cuda:0', ...), 'loss_fcos_bezier': tensor(1.8643, device='cuda:0', grad_fn=<DivBackward0>), 'data_time': 0.004210205050185323}

Can anyone help me with this problem or give me some advice?
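
One way to narrow this down (a generic PyTorch debugging aid, not something specific to AdelaiDet) is to enable anomaly detection so the backward pass raises at the first operation that produces a NaN, with a traceback into the forward pass. A minimal sketch, assuming training is launched through the Trainer in tools/train_net.py as in the traceback above:

```python
import torch

# Enable this before trainer.train(): every backward() then records forward-pass
# tracebacks and raises at the first op that yields NaN/Inf. It slows training
# down considerably, so use it only to localise the failure.
torch.autograd.set_detect_anomaly(True)

# ...then build cfg and the trainer exactly as tools/train_net.py already does:
# trainer = Trainer(cfg)
# trainer.resume_or_load(resume=args.resume)
# trainer.train()
```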

Yuliang-Liu commented 3 years ago

@dagongji10 Did you use the provided pretrained model? Have you tried using a larger batch size?

dagongji10 commented 3 years ago

@Yuliang-Liu The loss still becomes NaN whether or not I use the pretrained model. My batch size is 1 because I only have 1 GPU.
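
As a rough rule of thumb (the linear-scaling heuristic that detectron2-style recipes generally follow), BASE_LR is usually scaled with IMS_PER_BATCH, so a batch of 1 typically wants a much smaller learning rate than a config tuned for a multi-GPU batch. The reference values below are placeholders, not taken from the ABCNet configs:

```python
# Linear-scaling heuristic: keep BASE_LR / IMS_PER_BATCH roughly constant.
# The reference values are hypothetical; substitute the batch size and BASE_LR
# that the original recipe was actually tuned with.
reference_batch_size = 8
reference_base_lr = 0.001
my_batch_size = 1

scaled_lr = reference_base_lr * my_batch_size / reference_batch_size
print(f"suggested BASE_LR for IMS_PER_BATCH={my_batch_size}: {scaled_lr:g}")
```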

Drangonliao123 commented 3 years ago

Has your problem been solved? I also encountered a similar situation.

FloatingPointError: Loss became infinite or NaN at iteration=77389! loss_dict = {'rec_loss': 4.122969746589661, 'loss_fcos_cls': nan, 'loss_fcos_loc': 0.6235280930995941, 'loss_fcos_ctr': 0.6824557185173035, 'loss_fcos_bezier': 1.2363877594470978}

Is there any way to solve the above problems? I hope you can advise me, thanks! @dagongji10 @Yuliang-Liu

dagongji10 commented 3 years ago

@Drangonliao123 I only got this problem when using a mixed Chinese-English handwritten dataset. When I switched to a Chinese-only dataset, the problem went away. So I think my dataset quality is too low: the mixed Chinese-English handwriting is so messy that even I can't recognize some of it. Maybe you can check your images and make sure you can recognize the text yourself.
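
If mixed Chinese-English data is the suspect, one concrete thing to check before training is that every character in the transcriptions is covered by the recognition dictionary; characters the dictionary cannot encode end up as out-of-range or "unknown" labels and can destabilise rec_loss. A minimal sketch, assuming plain-text transcription files and a dictionary with one character per line (both file layouts and file names are assumptions, not AdelaiDet's required format):

```python
from collections import Counter

def find_uncovered_chars(label_paths, dict_path):
    """Report characters in the transcriptions that the dictionary cannot encode."""
    with open(dict_path, encoding="utf-8") as f:
        charset = {line.rstrip("\n") for line in f if line.strip()}
    missing = Counter()
    for path in label_paths:
        with open(path, encoding="utf-8") as f:
            for ch in f.read():
                if not ch.isspace() and ch not in charset:
                    missing[ch] += 1
    return missing

# Hypothetical file names: point these at your own transcriptions and dictionary.
for ch, n in find_uncovered_chars(["labels/train_gt.txt"], "my_char_dict.txt").most_common(20):
    print(f"{ch!r} appears {n} times but is missing from the dictionary")
```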

Drangonliao123 commented 3 years ago

Thank you! After changing the learning rate to 0.0001, there is no error! But I think you have a point. Thanks again!