facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Segmentation fault happens during the training session #2958

Closed cnr0724 closed 3 years ago

cnr0724 commented 3 years ago

Instructions To Reproduce the 🐛 Bug:

1. Full runnable code or full changes you made:

   If making changes to the project itself, please use the output of the following command:

   ```
   git rev-parse HEAD; git diff
   ```
2. What exact command you run:

3. __Full logs__ or other relevant observations:

```
WARNING [04/23 13:53:04 d2.config.compat]: Config '/home/nureechoi2200/CenterNet2/projects/CenterNet2/configs/CenterNet2_DLA-BiFPN-P5_640_24x_ST.yaml' has no VERSION. Assuming it to be compatible with latest v2.
Loading pretrained DLA!
[04/23 13:53:08 d2.engine.defaults]: Model: GeneralizedRCNN( (backbone): BiFPN( (bottom_up): BackboneWithTopLevels( (backbone): DLA( (base_layer): Sequential( (0): Conv2d(3, 16, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False) (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) ) (level0): Sequential( (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) ) (level1): Sequential( (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) ) (level2): Tree( (tree1): BasicBlock( (conv1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1),
bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (level3): Tree( (tree1): Tree( (tree1): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (tree2): Tree( (tree1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) ) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (level4): Tree( (tree1): Tree( (tree1): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (relu): ReLU(inplace=True) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (tree2): Tree( (tree1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) ) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (level5): Tree( (tree1): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) ) (tree2): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (root): Root( (conv): Conv2d(1280, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) ) (downsample): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (project): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (dla6): FeatureMapResampler( (reduction): Conv2d( 512, 160, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) ) (dla7): FeatureMapResampler() ) (repeated_bifpn): ModuleList( (0): SingleBiFPN( (outputs_f3_3_4): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (lateral_2_f2): Conv2d( 512, 160, kernel_size=(1, 1), stride=(1, 1) (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_5): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (lateral_1_f1): Conv2d( 256, 160, kernel_size=(1, 1), stride=(1, 1) (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_6): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (lateral_0_f0): Conv2d( 128, 160, kernel_size=(1, 1), stride=(1, 1) (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) 
(outputs_f0_0_7): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_7_8): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_6_9): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f3_3_5_10): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f4_4_11): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) ) (1): SingleBiFPN( (outputs_f3_3_4): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_5): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_6): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f0_0_7): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_7_8): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_6_9): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f3_3_5_10): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f4_4_11): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) 
) (2): SingleBiFPN( (outputs_f3_3_4): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_5): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_6): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f0_0_7): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f1_1_7_8): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f2_2_6_9): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f3_3_5_10): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) (outputs_f4_4_11): Conv2d( 160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): GroupNorm(32, 160, eps=1e-05, affine=True) ) ) ) ) (proposal_generator): CenterNet( (iou_loss): IOULoss() (centernet_head): CenterNetHead( (cls_tower): Sequential() (bbox_tower): Sequential( (0): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): GroupNorm(32, 160, eps=1e-05, affine=True) (2): ReLU() (3): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): GroupNorm(32, 160, eps=1e-05, affine=True) (5): ReLU() (6): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): GroupNorm(32, 160, eps=1e-05, affine=True) (8): ReLU() (9): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (10): GroupNorm(32, 160, eps=1e-05, affine=True) (11): ReLU() ) (share_tower): Sequential() (bbox_pred): Conv2d(160, 4, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1)) (scales): ModuleList( (0): Scale() (1): Scale() (2): Scale() (3): Scale() (4): Scale() ) (agn_hm): Conv2d(160, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (roi_heads): CustomCascadeROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.015625, sampling_ratio=0, aligned=True) (4): ROIAlign(output_size=(7, 7), spatial_scale=0.0078125, sampling_ratio=0, aligned=True) ) ) (box_head): ModuleList( (0): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=7840, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) (1): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=7840, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) (2): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=7840, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) ) (box_predictor): ModuleList( (0): CustomFastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (1): CustomFastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (2): CustomFastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) ) ) ) 
[04/23 13:53:11 d2.data.datasets.coco]: Loading /home/nureechoi2200/instances_train.json takes 2.31 seconds.
[04/23 13:53:11 d2.data.datasets.coco]: Loaded 9344 images in COCO format from /home/nureechoi2200/instances_train.json
[04/23 13:53:12 d2.data.build]: Removed 0 images with no usable annotations. 9344 images left.
[04/23 13:53:12 d2.data.build]: Distribution of instances among all 11 categories:
|  category  | #instances   |   category    | #instances   |  category  | #instances   |
|:----------:|:-------------|:-------------:|:-------------|:----------:|:-------------|
| pedestrian | 115843       |    people     | 39860        |  bicycle   | 15346        |
|    car     | 213465       |      van      | 36802        |   truck    | 18972        |
|  tricycle  | 7079         | awning-tric.. | 4764         |    bus     | 8856        |
|   motor    | 43758        |    others     | 2240         |            |              |
|   total    | 506985       |               |              |            |              |
[04/23 13:53:12 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/23 13:53:12 d2.data.build]: Using training sampler TrainingSampler
[04/23 13:53:12 d2.data.common]: Serializing 9344 elements to byte tensors and concatenating them all ...
[04/23 13:53:13 d2.data.common]: Serialized dataset takes 68.50 MiB
WARNING [04/23 13:53:13 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.
[04/23 13:53:13 d2.engine.train_loop]: Starting training from iteration 0
/home/nureechoi2200/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py:567: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
  per_candidate_nonzeros = per_candidate_inds.nonzero() # n
[04/23 13:53:28 d2.utils.events]: eta: 2:02:29 iter: 19 total_loss: 6.052 loss_cls_stage0: 1.62 loss_box_reg_stage0: 0.01732 loss_cls_stage1: 1.7 loss_box_reg_stage1: 0.0149 loss_cls_stage2: 1.355 loss_box_reg_stage2: 0.02087 loss_centernet_loc: 0.8989 loss_centernet_agn_pos: 0.4124 loss_centernet_agn_neg: 0.00431 time: 0.7542 data_time: 0.0171 lr: 0.0003999 max_mem: 14728M
[04/23 13:53:43 d2.utils.events]: eta: 1:59:48 iter: 39 total_loss: 3.166 loss_cls_stage0: 0.6608 loss_box_reg_stage0: 0.02505 loss_cls_stage1: 0.6228 loss_box_reg_stage1: 0.02261 loss_cls_stage2: 0.6039 loss_box_reg_stage2: 0.0218 loss_centernet_loc: 0.8819 loss_centernet_agn_pos: 0.249 loss_centernet_agn_neg: 0.04085 time: 0.7354 data_time: 0.0075 lr: 0.0007998 max_mem: 14728M
[04/23 13:53:57 d2.utils.events]: eta: 1:59:04 iter: 59 total_loss: 2.89 loss_cls_stage0: 0.6356 loss_box_reg_stage0: 0.0764 loss_cls_stage1: 0.5453 loss_box_reg_stage1: 0.03499 loss_cls_stage2: 0.5251 loss_box_reg_stage2: 0.0332 loss_centernet_loc: 0.7657 loss_centernet_agn_pos: 0.2225 loss_centernet_agn_neg: 0.03827 time: 0.7204 data_time: 0.0079 lr: 0.0011997 max_mem: 14728M
[04/23 13:54:10 d2.utils.events]: eta: 1:56:49 iter: 79 total_loss: 2.734 loss_cls_stage0: 0.5741 loss_box_reg_stage0: 0.08455 loss_cls_stage1: 0.5104 loss_box_reg_stage1: 0.06171 loss_cls_stage2: 0.4916 loss_box_reg_stage2: 0.082 loss_centernet_loc: 0.6659 loss_centernet_agn_pos: 0.1962 loss_centernet_agn_neg: 0.03807 time: 0.7093 data_time: 0.0076 lr: 0.0015996 max_mem: 14728M
[04/23 13:54:24 d2.utils.events]: eta: 1:54:37 iter: 99 total_loss: 2.627 loss_cls_stage0: 0.52 loss_box_reg_stage0: 0.1065 loss_cls_stage1: 0.4731 loss_box_reg_stage1: 0.06472 loss_cls_stage2: 0.4635 loss_box_reg_stage2: 0.08181 loss_centernet_loc: 0.6329 loss_centernet_agn_pos: 0.2319 loss_centernet_agn_neg: 0.02105 time: 0.7020 data_time: 0.0074 lr: 0.0019995 max_mem: 14728M
[04/23 13:54:37 d2.utils.events]: eta: 1:53:48 iter: 119 total_loss: 2.679 loss_cls_stage0: 0.6001 loss_box_reg_stage0: 0.2266 loss_cls_stage1: 0.4671 loss_box_reg_stage1: 0.1147 loss_cls_stage2: 0.4019 loss_box_reg_stage2: 0.05074 loss_centernet_loc: 0.5668 loss_centernet_agn_pos: 0.1775 loss_centernet_agn_neg: 0.03934 time: 0.6983 data_time: 0.0075 lr: 0.0023994 max_mem: 14728M
[04/23 13:54:51 d2.utils.events]: eta: 1:53:25 iter: 139 total_loss: 2.782 loss_cls_stage0: 0.6496 loss_box_reg_stage0: 0.2692 loss_cls_stage1: 0.4822 loss_box_reg_stage1: 0.1411 loss_cls_stage2: 0.4112 loss_box_reg_stage2: 0.06638 loss_centernet_loc: 0.5506 loss_centernet_agn_pos: 0.1595 loss_centernet_agn_neg: 0.03991 time: 0.6954 data_time: 0.0073 lr: 0.0027993 max_mem: 14728M
[04/23 13:55:05 d2.utils.events]: eta: 1:53:04 iter: 159 total_loss: 2.764 loss_cls_stage0: 0.5942 loss_box_reg_stage0: 0.2964 loss_cls_stage1: 0.4844 loss_box_reg_stage1: 0.1787 loss_cls_stage2: 0.3954 loss_box_reg_stage2: 0.07791 loss_centernet_loc: 0.5494 loss_centernet_agn_pos: 0.1484 loss_centernet_agn_neg: 0.03629 time: 0.6935 data_time: 0.0074 lr: 0.0031992 max_mem: 14728M
[04/23 13:55:18 d2.utils.events]: eta: 1:52:41 iter: 179 total_loss: 2.826 loss_cls_stage0: 0.5696 loss_box_reg_stage0: 0.3365 loss_cls_stage1: 0.4705 loss_box_reg_stage1: 0.2211 loss_cls_stage2: 0.3837 loss_box_reg_stage2: 0.1101 loss_centernet_loc: 0.5252 loss_centernet_agn_pos: 0.1775 loss_centernet_agn_neg: 0.0244 time: 0.6927 data_time: 0.0073 lr: 0.0035991 max_mem: 14728M
[04/23 13:55:32 d2.utils.events]: eta: 1:52:12 iter: 199 total_loss: 2.95 loss_cls_stage0: 0.5469 loss_box_reg_stage0: 0.3446 loss_cls_stage1: 0.4451 loss_box_reg_stage1: 0.2927 loss_cls_stage2: 0.3453 loss_box_reg_stage2: 0.1722 loss_centernet_loc: 0.5231 loss_centernet_agn_pos: 0.1697 loss_centernet_agn_neg: 0.01975 time: 0.6911 data_time: 0.0072 lr: 0.003999 max_mem: 14728M
[04/23 13:55:45 d2.utils.events]: eta: 1:51:54 iter: 219 total_loss: 3.157 loss_cls_stage0: 0.5719 loss_box_reg_stage0: 0.4072 loss_cls_stage1: 0.4827 loss_box_reg_stage1: 0.3626 loss_cls_stage2: 0.3782 loss_box_reg_stage2: 0.2195 loss_centernet_loc: 0.5031 loss_centernet_agn_pos: 0.1553 loss_centernet_agn_neg: 0.01981 time: 0.6900 data_time: 0.0073 lr: 0.0043989 max_mem: 14728M
[04/23 13:55:59 d2.utils.events]: eta: 1:51:44 iter: 239 total_loss: 3.059 loss_cls_stage0: 0.538 loss_box_reg_stage0: 0.3783 loss_cls_stage1: 0.4579 loss_box_reg_stage1: 0.3894 loss_cls_stage2: 0.3628 loss_box_reg_stage2: 0.2283 loss_centernet_loc: 0.4648 loss_centernet_agn_pos: 0.1292 loss_centernet_agn_neg: 0.02834 time: 0.6894 data_time: 0.0068 lr: 0.0047988 max_mem: 14728M
[04/23 13:56:13 d2.utils.events]: eta: 1:51:27 iter: 259 total_loss: 3.148 loss_cls_stage0: 0.5284 loss_box_reg_stage0: 0.3574 loss_cls_stage1: 0.4747 loss_box_reg_stage1: 0.4639 loss_cls_stage2: 0.3725 loss_box_reg_stage2: 0.2836 loss_centernet_loc: 0.4671 loss_centernet_agn_pos: 0.1569 loss_centernet_agn_neg: 0.01427 time: 0.6886 data_time: 0.0072 lr: 0.0051987 max_mem: 14728M
[04/23 13:56:26 d2.utils.events]: eta: 1:51:00 iter: 279 total_loss: 3.2 loss_cls_stage0: 0.5476 loss_box_reg_stage0: 0.3523 loss_cls_stage1: 0.4751 loss_box_reg_stage1: 0.4024 loss_cls_stage2: 0.3456 loss_box_reg_stage2: 0.2731 loss_centernet_loc: 0.4982 loss_centernet_agn_pos: 0.1402 loss_centernet_agn_neg: 0.03393 time: 0.6873 data_time: 0.0071 lr: 0.0055986 max_mem: 14728M
[04/23 13:56:40 d2.utils.events]: eta: 1:50:51 iter: 299 total_loss: 3.108 loss_cls_stage0: 0.485 loss_box_reg_stage0: 0.3705 loss_cls_stage1: 0.4247 loss_box_reg_stage1: 0.4972 loss_cls_stage2: 0.3597 loss_box_reg_stage2: 0.3391 loss_centernet_loc: 0.4464 loss_centernet_agn_pos: 0.1148 loss_centernet_agn_neg: 0.03805 time: 0.6872 data_time: 0.0068 lr: 0.0059985 max_mem: 14728M
[04/23 13:56:54 d2.utils.events]: eta: 1:50:45 iter: 319 total_loss: 3.162 loss_cls_stage0: 0.513 loss_box_reg_stage0: 0.3887 loss_cls_stage1: 0.47 loss_box_reg_stage1: 0.476 loss_cls_stage2: 0.3626 loss_box_reg_stage2: 0.3535 loss_centernet_loc: 0.4228 loss_centernet_agn_pos: 0.1185 loss_centernet_agn_neg: 0.0322 time: 0.6871 data_time: 0.0070 lr: 0.0063984 max_mem: 14728M
[04/23 13:57:07 d2.utils.events]: eta: 1:50:28 iter: 339 total_loss: 2.997 loss_cls_stage0: 0.471 loss_box_reg_stage0: 0.3816 loss_cls_stage1: 0.4225 loss_box_reg_stage1: 0.4657 loss_cls_stage2: 0.3607 loss_box_reg_stage2: 0.3155 loss_centernet_loc: 0.4251 loss_centernet_agn_pos: 0.1187 loss_centernet_agn_neg: 0.03719 time: 0.6865 data_time: 0.0075 lr: 0.0067983 max_mem: 14728M
[04/23 13:57:21 d2.utils.events]: eta: 1:50:22 iter: 359 total_loss: 3.413 loss_cls_stage0: 0.5035 loss_box_reg_stage0: 0.2947 loss_cls_stage1: 0.492 loss_box_reg_stage1: 0.4403 loss_cls_stage2: 0.4239 loss_box_reg_stage2: 0.3746 loss_centernet_loc: 0.5064 loss_centernet_agn_pos: 0.198 loss_centernet_agn_neg: 0.0102 time: 0.6865 data_time: 0.0073 lr: 0.0071982 max_mem: 14728M
[04/23 13:57:34 d2.utils.events]: eta: 1:50:01 iter: 379 total_loss: 3.235 loss_cls_stage0: 0.5073 loss_box_reg_stage0: 0.3303 loss_cls_stage1: 0.4648 loss_box_reg_stage1: 0.4745 loss_cls_stage2: 0.3805 loss_box_reg_stage2: 0.3874 loss_centernet_loc: 0.4567 loss_centernet_agn_pos: 0.1567 loss_centernet_agn_neg: 0.02206 time: 0.6859 data_time: 0.0072 lr: 0.0075981 max_mem: 14728M
[04/23 13:57:48 d2.utils.events]: eta: 1:49:47 iter: 399 total_loss: 3.205 loss_cls_stage0: 0.5066 loss_box_reg_stage0: 0.32 loss_cls_stage1: 0.4696 loss_box_reg_stage1: 0.4593 loss_cls_stage2: 0.4092 loss_box_reg_stage2: 0.3839 loss_centernet_loc: 0.4646 loss_centernet_agn_pos: 0.1243 loss_centernet_agn_neg: 0.02291 time: 0.6858 data_time: 0.0073 lr: 0.007998 max_mem: 14728M
[04/23 13:58:02 d2.utils.events]: eta: 1:49:41 iter: 419 total_loss: 3.138 loss_cls_stage0: 0.4728 loss_box_reg_stage0: 0.3509 loss_cls_stage1: 0.4442 loss_box_reg_stage1: 0.4668 loss_cls_stage2: 0.3927 loss_box_reg_stage2: 0.3932 loss_centernet_loc: 0.4418 loss_centernet_agn_pos: 0.1321 loss_centernet_agn_neg: 0.03348 time: 0.6858 data_time: 0.0072 lr: 0.0083979 max_mem: 14728M
[04/23 13:58:15 d2.utils.events]: eta: 1:49:20 iter: 439 total_loss: 2.974 loss_cls_stage0: 0.4456 loss_box_reg_stage0: 0.3362 loss_cls_stage1: 0.3958 loss_box_reg_stage1: 0.4705 loss_cls_stage2: 0.3524 loss_box_reg_stage2: 0.384 loss_centernet_loc: 0.3835 loss_centernet_agn_pos: 0.1023 loss_centernet_agn_neg: 0.03541 time: 0.6856 data_time: 0.0073 lr: 0.0087978 max_mem: 14728M
[04/23 13:58:29 d2.utils.events]: eta: 1:49:02 iter: 459 total_loss: 2.928 loss_cls_stage0: 0.4419 loss_box_reg_stage0: 0.367 loss_cls_stage1: 0.398 loss_box_reg_stage1: 0.4531 loss_cls_stage2: 0.3439 loss_box_reg_stage2: 0.3832 loss_centernet_loc: 0.405 loss_centernet_agn_pos: 0.1016 loss_centernet_agn_neg: 0.03912 time: 0.6854 data_time: 0.0074 lr: 0.0091977 max_mem: 14728M
[04/23 13:58:43 d2.utils.events]: eta: 1:48:47 iter: 479 total_loss: 2.971 loss_cls_stage0: 0.4373 loss_box_reg_stage0: 0.3454 loss_cls_stage1: 0.4082 loss_box_reg_stage1: 0.4832 loss_cls_stage2: 0.3483 loss_box_reg_stage2: 0.433 loss_centernet_loc: 0.3976 loss_centernet_agn_pos: 0.1119 loss_centernet_agn_neg: 0.02877 time: 0.6852 data_time: 0.0075 lr: 0.0095976 max_mem: 14728M
[04/23 13:58:56 d2.utils.events]: eta: 1:48:28 iter: 499 total_loss: 2.921 loss_cls_stage0: 0.4504 loss_box_reg_stage0: 0.3669 loss_cls_stage1: 0.3925 loss_box_reg_stage1: 0.4761 loss_cls_stage2: 0.3326 loss_box_reg_stage2: 0.395 loss_centernet_loc: 0.4009 loss_centernet_agn_pos: 0.09936 loss_centernet_agn_neg: 0.03187 time: 0.6850 data_time: 0.0073 lr: 0.0099975 max_mem: 14728M
[04/23 13:59:10 d2.utils.events]: eta: 1:48:18 iter: 519 total_loss: 3.053 loss_cls_stage0: 0.4377 loss_box_reg_stage0: 0.3257 loss_cls_stage1: 0.4132 loss_box_reg_stage1: 0.4882 loss_cls_stage2: 0.3713 loss_box_reg_stage2: 0.4042 loss_centernet_loc: 0.4232 loss_centernet_agn_pos: 0.1411 loss_centernet_agn_neg: 0.01619 time: 0.6851 data_time: 0.0076 lr: 0.010397 max_mem: 14728M
[04/23 13:59:24 d2.utils.events]: eta: 1:48:05 iter: 539 total_loss: 3.036 loss_cls_stage0: 0.4543 loss_box_reg_stage0: 0.374 loss_cls_stage1: 0.4175 loss_box_reg_stage1: 0.5028 loss_cls_stage2: 0.359 loss_box_reg_stage2: 0.3874 loss_centernet_loc: 0.4002 loss_centernet_agn_pos: 0.138 loss_centernet_agn_neg: 0.01314 time: 0.6849 data_time: 0.0074 lr: 0.010797 max_mem: 14728M
[04/23 13:59:37 d2.utils.events]: eta: 1:47:52 iter: 559 total_loss: 3.016 loss_cls_stage0: 0.4179 loss_box_reg_stage0: 0.3438 loss_cls_stage1: 0.3911 loss_box_reg_stage1: 0.4936 loss_cls_stage2: 0.3474 loss_box_reg_stage2: 0.4274 loss_centernet_loc: 0.3711 loss_centernet_agn_pos: 0.1086 loss_centernet_agn_neg: 0.02994 time: 0.6849 data_time: 0.0072 lr: 0.011197 max_mem: 14728M
[04/23 13:59:51 d2.utils.events]: eta: 1:47:39 iter: 579 total_loss: 2.89 loss_cls_stage0: 0.4035 loss_box_reg_stage0: 0.3489 loss_cls_stage1: 0.3669 loss_box_reg_stage1: 0.456 loss_cls_stage2: 0.3345 loss_box_reg_stage2: 0.3952 loss_centernet_loc: 0.3749 loss_centernet_agn_pos: 0.09851 loss_centernet_agn_neg: 0.03317 time: 0.6849 data_time: 0.0072 lr: 0.011597 max_mem: 14728M
[04/23 14:00:05 d2.utils.events]: eta: 1:47:26 iter: 599 total_loss: 2.966 loss_cls_stage0: 0.4413 loss_box_reg_stage0: 0.3528 loss_cls_stage1: 0.4015 loss_box_reg_stage1: 0.5005 loss_cls_stage2: 0.3388 loss_box_reg_stage2: 0.4366 loss_centernet_loc: 0.3729 loss_centernet_agn_pos: 0.09329 loss_centernet_agn_neg: 0.03602 time: 0.6849 data_time: 0.0072 lr: 0.011997 max_mem: 14728M
[04/23 14:00:19 d2.utils.events]: eta: 1:47:14 iter: 619 total_loss: 2.85 loss_cls_stage0: 0.4023 loss_box_reg_stage0: 0.3236 loss_cls_stage1: 0.3743 loss_box_reg_stage1: 0.4489 loss_cls_stage2: 0.3346 loss_box_reg_stage2: 0.3655 loss_centernet_loc: 0.3944 loss_centernet_agn_pos: 0.1042 loss_centernet_agn_neg: 0.02554 time: 0.6852 data_time: 0.0076 lr: 0.012397 max_mem: 14728M
[04/23 14:00:32 d2.utils.events]: eta: 1:47:00 iter: 639 total_loss: 2.809 loss_cls_stage0: 0.4131 loss_box_reg_stage0: 0.3501 loss_cls_stage1: 0.3713 loss_box_reg_stage1: 0.4603 loss_cls_stage2: 0.3312 loss_box_reg_stage2: 0.3954 loss_centernet_loc: 0.3635 loss_centernet_agn_pos: 0.1024 loss_centernet_agn_neg: 0.03013 time: 0.6850 data_time: 0.0074 lr: 0.012797 max_mem: 14728M
[04/23 14:00:46 d2.utils.events]: eta: 1:46:56 iter: 659 total_loss: 2.935 loss_cls_stage0: 0.4381 loss_box_reg_stage0: 0.3588 loss_cls_stage1: 0.4013 loss_box_reg_stage1: 0.4657 loss_cls_stage2: 0.3409 loss_box_reg_stage2: 0.391 loss_centernet_loc: 0.3764 loss_centernet_agn_pos: 0.1125 loss_centernet_agn_neg: 0.02722 time: 0.6851 data_time: 0.0072 lr: 0.013197 max_mem: 14728M
[04/23 14:01:00 d2.utils.events]: eta: 1:46:44 iter: 679 total_loss: 2.759 loss_cls_stage0: 0.3758 loss_box_reg_stage0: 0.3442 loss_cls_stage1: 0.3346 loss_box_reg_stage1: 0.4716 loss_cls_stage2: 0.3069 loss_box_reg_stage2: 0.4057 loss_centernet_loc: 0.334 loss_centernet_agn_pos: 0.09138 loss_centernet_agn_neg: 0.0347 time: 0.6851 data_time: 0.0075 lr: 0.013597 max_mem: 14728M
[04/23 14:01:14 d2.utils.events]: eta: 1:46:31 iter: 699 total_loss: 2.873 loss_cls_stage0: 0.4271 loss_box_reg_stage0: 0.3639 loss_cls_stage1: 0.3811 loss_box_reg_stage1: 0.4816 loss_cls_stage2: 0.3313 loss_box_reg_stage2: 0.4066 loss_centernet_loc: 0.3663 loss_centernet_agn_pos: 0.09208 loss_centernet_agn_neg: 0.03429 time: 0.6853 data_time: 0.0076 lr: 0.013997 max_mem: 14728M
[04/23 14:01:27 d2.utils.events]: eta: 1:46:17 iter: 719 total_loss: 2.818 loss_cls_stage0: 0.4088 loss_box_reg_stage0: 0.3397 loss_cls_stage1: 0.3552 loss_box_reg_stage1: 0.463 loss_cls_stage2: 0.3122 loss_box_reg_stage2: 0.3946 loss_centernet_loc: 0.3592 loss_centernet_agn_pos: 0.108 loss_centernet_agn_neg: 0.0343 time: 0.6853 data_time: 0.0073 lr: 0.014396 max_mem: 14728M
[04/23 14:01:41 d2.utils.events]: eta: 1:46:03 iter: 739 total_loss: 2.862 loss_cls_stage0: 0.417 loss_box_reg_stage0: 0.3606 loss_cls_stage1: 0.385 loss_box_reg_stage1: 0.4784 loss_cls_stage2: 0.3317 loss_box_reg_stage2:
0.3995 loss_centernet_loc: 0.3668 loss_centernet_agn_pos: 0.09324 loss_centernet_agn_neg: 0.0316 time: 0.6853 data_time: 0.0080 lr: 0.014796 max_mem: 14728M ``` 4. please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset. ## Expected behavior: Run as much as it is defined on configs. ## Environment: Provide your environment information using the following command: ``` ---------------------- ------------------------------------------------------------------------------- sys.platform linux Python 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] numpy 1.19.5 detectron2 0.4 @/home/nureechoi2200/.local/lib/python3.6/site-packages/detectron2 Compiler GCC 7.5 CUDA compiler CUDA 11.0 detectron2 arch flags 8.0 DETECTRON2_ENV_MODULE PyTorch 1.7.1+cu110 @/home/nureechoi2200/.local/lib/python3.6/site-packages/torch PyTorch debug build False GPU available True GPU 0 GeForce RTX 3090 (arch=8.6) CUDA_HOME /usr/local/cuda Pillow 8.2.0 torchvision 0.8.2+cu110 @/home/nureechoi2200/.local/lib/python3.6/site-packages/torchvision torchvision arch flags 3.5, 5.0, 6.0, 7.0, 7.5, 8.0 fvcore 0.1.5.post20210415 iopath 0.1.8 cv2 4.5.1 ---------------------- ------------------------------------------------------------------------------- PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) - OpenMP 201511 (a.k.a. 
OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 11.0 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80 - CuDNN 8.0.5 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, ``` If your issue looks like an installation issue / environment issue, please first try to solve it yourself with the instructions in https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues Sometimes it happens and sometimes it doesn't. I don't know what I could do.
cnr0724 commented 3 years ago

I figured it out that my computer was the problem. Sorry.

cnr0724 commented 3 years ago

What is this?

So this kind of thing still happens. I thought it was my computer, but I'm not sure that it is. Why does this happen?

ppwwyyxx commented 3 years ago

We will not investigate issues we can't reproduce. If it's specific to an environment, the environment needs to be provided (e.g. as a docker container).

ff137 commented 3 years ago

@cnr0724 Don't you get an incompatibility error running the RTX 3090 (arch 8.6)? All the arch flags in your environment indicate support only up to 8.0.

We're trying to get an RTX 3090 working with detectron2 ourselves, and we're curious whether those versions (detectron2 0.4 + PyTorch 1.7.1+cu110) actually support the 3090.
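As a quick sanity check (a hypothetical helper, not part of detectron2), you can compare the GPU's compute capability against the architectures a binary ships kernels for. CUDA guarantees cubin compatibility within a major version from a lower to a higher minor revision, so `sm_80` kernels can run on an 8.6 card; this simplified check looks at the major version only:

```python
def same_major_kernel_present(arch_list, major):
    """True if the binary ships an sm_* kernel with the same major version.

    Simplification: ignores the minor revision. A kernel built for a lower
    minor (e.g. sm_80) runs on a higher-minor device (e.g. 8.6), but not
    the other way around.
    """
    return any(a.startswith(f"sm_{major}") for a in arch_list)

# Example with the torchvision arch flags reported in the environment above:
built_for = ["sm_35", "sm_50", "sm_60", "sm_70", "sm_75", "sm_80"]
print(same_major_kernel_present(built_for, 8))  # sm_80 covers an 8.x card -> True
print(same_major_kernel_present(built_for, 9))  # no sm_9x kernel -> False

# On a machine with a visible GPU you would feed it live values instead:
#   import torch
#   major, _ = torch.cuda.get_device_capability(0)
#   same_major_kernel_present(torch.cuda.get_arch_list(), major)
```

Note that a mismatch in a compiled extension (detectron2's "arch flags 8.0" above) can surface as a crash rather than a clean error message.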

cnr0724 commented 3 years ago

@mdebeer No, I did not get an incompatibility error. Did you set the TORCH_CUDA_ARCH_LIST variable? `export TORCH_CUDA_ARCH_LIST=8.0` before building might help.
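For reference, a minimal sketch of that rebuild recipe (assuming a from-source install of detectron2; the pip commands are shown as comments so nothing is reinstalled by accident):

```shell
# Limit the CUDA arch list before compiling detectron2 from source so the
# extension kernels match the GPU; sm_80 kernels also run on an 8.6 card.
export TORCH_CUDA_ARCH_LIST="8.0"
echo "TORCH_CUDA_ARCH_LIST=${TORCH_CUDA_ARCH_LIST}"

# Then rebuild in the same shell, e.g.:
#   pip uninstall -y detectron2
#   pip install 'git+https://github.com/facebookresearch/detectron2.git'
```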

cnr0724 commented 3 years ago

I found out the problem was with my CPU, so it is unrelated to the code.