chensnathan / YOLOF

You Only Look One-level Feature (YOLOF), CVPR2021, Detectron2
MIT License

Loss becomes NaN during training. #9

Closed zyrant closed 3 years ago

zyrant commented 3 years ago

[04/01 18:48:18 detectron2]: Full config saved to output/yolof/CSP_D_53_DC5_3x/config.yaml [04/01 18:48:18 d2.utils.env]: Using a generated random seed 18400952

YOLOF( (backbone): DarkNet( (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(32, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (act1): MishCuda() (layer1): CrossStagePartialBlock( (base_layer): Sequential( (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (partial_transition1): Sequential( (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (stage_layers): Sequential( (0): DarkBlock( (downsample): Sequential( (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (bn1): BatchNorm2d(32, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) ) (partial_transition2): Sequential( (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (fuse_transition): Sequential( (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) ) (layer2): CrossStagePartialBlock( (base_layer): Sequential( (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (partial_transition1): Sequential( (0): Conv2d(128, 
64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (stage_layers): Sequential( (0): DarkBlock( (downsample): Sequential( (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (bn1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (1): DarkBlock( (bn1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) ) (partial_transition2): Sequential( (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (fuse_transition): Sequential( (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) ) (layer3): CrossStagePartialBlock( (base_layer): Sequential( (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (partial_transition1): Sequential( (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, 
track_running_stats=True) (2): MishCuda() ) (stage_layers): Sequential( (0): DarkBlock( (downsample): Sequential( (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (1): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (2): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (3): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (4): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, 
track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (5): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (6): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (7): DarkBlock( (bn1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) ) (partial_transition2): Sequential( (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (fuse_transition): Sequential( (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) ) (layer4): CrossStagePartialBlock( (base_layer): Sequential( (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, 
track_running_stats=True) (2): MishCuda() ) (partial_transition1): Sequential( (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (stage_layers): Sequential( (0): DarkBlock( (downsample): Sequential( (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (1): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (2): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (3): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) 
(activation): MishCuda() ) (4): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (5): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (6): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) (7): DarkBlock( (bn1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (activation): MishCuda() ) ) (partial_transition2): Sequential( (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (fuse_transition): Sequential( (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) ) (layer5): 
CrossStagePartialBlock( (base_layer): Sequential( (0): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (1): BatchNorm2d(1024, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (partial_transition1): Sequential( (0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (stage_layers): Sequential( (0): DarkBlock( (downsample): Sequential( (0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (bn1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (activation): MishCuda() ) (1): DarkBlock( (bn1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (activation): MishCuda() ) (2): DarkBlock( (bn1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (bn2): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (activation): MishCuda() ) (3): DarkBlock( (bn1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, 
track_running_stats=True) (bn2): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (activation): MishCuda() ) ) (partial_transition2): Sequential( (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) (fuse_transition): Sequential( (0): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(1024, eps=0.0001, momentum=0.03, affine=True, track_running_stats=True) (2): MishCuda() ) ) ) (encoder): DilatedEncoder( (lateral_conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1)) (lateral_norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (fpn_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (dilated_encoder_blocks): Sequential( (0): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (1): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (2): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (3): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (4): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, 
kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(5, 5), dilation=(5, 5)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (5): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) (6): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(7, 7), dilation=(7, 7)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): 
LeakyReLU(negative_slope=0.1, inplace=True) ) ) (7): Bottleneck( (conv1): Sequential( (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv2): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) (conv3): Sequential( (0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) ) ) ) ) (decoder): Decoder( (cls_subnet): Sequential( (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.1, inplace=True) ) (bbox_subnet): Sequential( (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.1, inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.1, inplace=True) (6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (8): LeakyReLU(negative_slope=0.1, inplace=True) (9): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (10): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (11): LeakyReLU(negative_slope=0.1, inplace=True) ) (cls_score): Conv2d(512, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (bbox_pred): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (object_pred): Conv2d(512, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (anchor_matcher): UniformMatcher() ) WARNING [04/01 18:48:25 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[04/01 18:48:25 d2.data.datasets.coco]: Loaded 60 images in COCO format from /home/zy/Downloads/AnchorFreeDet-master/datasets/balloon/annotations/instances_train.json [04/01 18:48:25 d2.data.build]: Removed 0 images with no usable annotations. 60 images left. [04/01 18:48:25 d2.data.build]: Distribution of instances among all 1 categories: category #instances
balloon 254

[04/01 18:48:25 d2.data.build]: Using training sampler TrainingSampler [04/01 18:48:25 d2.data.common]: Serializing 60 elements to byte tensors and concatenating them all ... [04/01 18:48:25 d2.data.common]: Serialized dataset takes 0.02 MiB [04/01 18:48:25 fvcore.common.checkpoint]: Loading checkpoint from /home/zy/Downloads/AnchorFreeDet-master/pretrained_models/YOLOF_CSP_D_53_DC5_9x_stage2_3x.pth WARNING [04/01 18:48:26 fvcore.common.checkpoint]: Skip loading parameter 'decoder.cls_score.weight' to the model due to incompatible shapes: (480, 512, 3, 3) in the checkpoint but (6, 512, 3, 3) in the model! You might want to double check if this is expected. WARNING [04/01 18:48:26 fvcore.common.checkpoint]: Skip loading parameter 'decoder.cls_score.bias' to the model due to incompatible shapes: (480,) in the checkpoint but (6,) in the model! You might want to double check if this is expected. [04/01 18:48:26 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint: decoder.cls_score.{bias, weight} [04/01 18:48:26 d2.engine.train_loop]: Starting training from iteration 0 [04/01 18:48:36 d2.utils.events]: eta: 6:57:44 iter: 19 total_loss: 1.848 loss_cls: 1.525 loss_box_reg: 0.3422 time: 0.4429 data_time: 0.0343 lr: 0.000533 max_mem: 1699M [04/01 18:48:43 d2.utils.events]: eta: 6:51:51 iter: 39 total_loss: 1.909 loss_cls: 1.26 loss_box_reg: 0.7277 time: 0.3926 data_time: 0.0077 lr: 0.001066 max_mem: 1699M [04/01 18:48:51 d2.utils.events]: eta: 7:00:57 iter: 59 total_loss: 2.088 loss_cls: 1.241 loss_box_reg: 0.848 time: 0.3911 data_time: 0.0074 lr: 0.001599 max_mem: 1699M [04/01 18:49:00 d2.utils.events]: eta: 7:06:05 iter: 79 total_loss: 2.109 loss_cls: 1.09 loss_box_reg: 0.96 time: 0.4055 data_time: 0.0057 lr: 0.0021319 max_mem: 1699M [04/01 18:49:07 d2.utils.events]: eta: 7:03:09 iter: 99 total_loss: 2.338 loss_cls: 1.33 loss_box_reg: 0.961 time: 0.3881 data_time: 0.0036 lr: 0.0026649 max_mem: 1699M ERROR [04/01 18:49:08 
d2.engine.train_loop]: Exception during training: Traceback (most recent call last): File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 140, in train self.run_step() File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 441, in run_step self._trainer.run_step() File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 244, in run_step self._write_metrics(loss_dict, data_time) File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 287, in _write_metrics f"Loss became infinite or NaN at iteration={self.iter}!\n" FloatingPointError: Loss became infinite or NaN at iteration=105! loss_dict = {'loss_cls': nan, 'loss_box_reg': 1.093152403831482} [04/01 18:49:08 d2.engine.hooks]: Overall training speed: 103 iterations in 0:00:39 (0.3871 s / it) [04/01 18:49:08 d2.engine.hooks]: Total training time: 0:00:40 (0:00:00 on hooks) [04/01 18:49:08 d2.utils.events]: eta: 7:02:48 iter: 105 total_loss: 2.338 loss_cls: 1.269 loss_box_reg: 0.8847 time: 0.3852 data_time: 0.0034 lr: 0.0027982 max_mem: 1699M Traceback (most recent call last): File "train_net.py", line 259, in args=(args,), File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/launch.py", line 82, in launch main_func(*args) File "train_net.py", line 246, in main return trainer.train() File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 431, in train super().train(self.start_iter, self.max_iter) File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 140, in train self.run_step() File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 441, in run_step self._trainer.run_step() File 
"/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 244, in run_step self._write_metrics(loss_dict, data_time) File "/home/zy/anaconda3/envs/detectron2/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 287, in _write_metrics f"Loss became infinite or NaN at iteration={self.iter}!\n" FloatingPointError: Loss became infinite or NaN at iteration=105! loss_dict = {'loss_cls': nan, 'loss_box_reg': 1.093152403831482}
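The "incompatible shapes" warnings in the log above are expected when fine-tuning with a different number of classes. The numbers line up if the classification head's output width is num_anchors * num_classes; the anchor count of 6 below is inferred from the model dump (cls_score → 6 channels, bbox_pred → 24 = 6 × 4), not confirmed against the config:

```python
# Sanity check of the cls_score shapes reported in the log.
# Assumption: head width = num_anchors * num_classes, with num_anchors = 6
# inferred from the printed model (cls_score: 6 out-channels for 1 class).
NUM_ANCHORS = 6
COCO_CLASSES = 80       # classes the checkpoint was trained on
BALLOON_CLASSES = 1     # classes in the custom balloon dataset

checkpoint_width = NUM_ANCHORS * COCO_CLASSES     # matches (480, 512, 3, 3) in the warning
model_width = NUM_ANCHORS * BALLOON_CLASSES       # matches (6, 512, 3, 3) in the model
print(checkpoint_width, model_width)  # → 480 6
```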

zyrant commented 3 years ago

I got this error with yolof_CSP_D_53_DC5_9x_stage2_3x.yaml; with yolof_R_50_C5_1x.yaml there was no error.

chensnathan commented 3 years ago

Hi, it seems that you are trying to fine-tune YOLOF on a custom dataset with one GPU.

You may need to adjust the learning rate and IMS_PER_BATCH according to the detectron2 tutorial.
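A sketch of that adjustment using the linear scaling rule (scale the base LR in proportion to the total batch size). The defaults below, a total batch of 64 images and a base LR of 0.12, are assumptions for illustration; read the real values from the config you are training with:

```python
# Linear scaling rule: scale BASE_LR proportionally to IMS_PER_BATCH.
# The DEFAULT_* values are assumptions for illustration, not confirmed
# repo defaults -- take them from your own config file.
DEFAULT_IMS_PER_BATCH = 64
DEFAULT_BASE_LR = 0.12

def scaled_base_lr(ims_per_batch: int,
                   default_batch: int = DEFAULT_IMS_PER_BATCH,
                   default_lr: float = DEFAULT_BASE_LR) -> float:
    """Return a learning rate scaled linearly with the total batch size."""
    return default_lr * ims_per_batch / default_batch

# e.g. a single GPU that fits 8 images per batch:
print(scaled_base_lr(8))
```

These two values would then go into the config as SOLVER.IMS_PER_BATCH and SOLVER.BASE_LR overrides.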

Moreover, I'm not sure whether the hyper-parameters found on the COCO dataset are suitable for your dataset. You may need to tune them yourself.
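While tuning, it can help to fail fast with a readable message whenever a loss goes non-finite. A minimal helper in the spirit of the check that produced the FloatingPointError in the log above (the function name and interface here are illustrative, not a detectron2 API):

```python
import math

def assert_finite_losses(loss_dict: dict, iteration: int) -> None:
    """Raise if any loss is NaN/Inf, listing the offending keys.

    Illustrative helper, not part of detectron2; it mirrors the check
    in the trainer's _write_metrics that aborted the run at iteration 105.
    """
    bad = {k: v for k, v in loss_dict.items() if not math.isfinite(v)}
    if bad:
        raise FloatingPointError(
            f"Loss became infinite or NaN at iteration={iteration}! bad={bad}"
        )

# Passes on the finite losses from earlier iterations in the log:
assert_finite_losses({"loss_cls": 1.269, "loss_box_reg": 0.8847}, 99)
```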

zyrant commented 3 years ago

Thank you for your reply, I will give it a try.

ZY