facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Result verification failed! #2633

Closed Revist closed 3 years ago

Revist commented 3 years ago

Hi, I installed pre-built detectron2 from commit 4841e70 and then ran the supplied tests:

./dev/run_inference_tests.sh 

At the end I get the message "Result verification failed!", yet the numbers seem close. Is this a serious error, and what can I do about it?
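For context, each quick-schedule config pins expected metrics in `TEST.EXPECTED_RESULTS` as `[task, metric, value, tolerance]`, and the run fails whenever a metric drifts outside the tolerance band. Below is a minimal sketch of that check, modeled on `detectron2.evaluation.testing.verify_results` but with a simplified signature; the numbers are the ones from the log that follows.

```python
# Minimal sketch of the check behind "Result verification failed!".
# detectron2.evaluation.testing.verify_results performs this comparison
# against cfg.TEST.EXPECTED_RESULTS; the signature is simplified here.
def verify_results(expected_results, results):
    ok = True
    for task, metric, expected, tolerance in expected_results:
        actual = results[task][metric]
        if abs(actual - expected) > tolerance:
            ok = False
            print(f"{task}/{metric}: got {actual:.4f}, "
                  f"expected {expected} +/- {tolerance}")
    return ok

# Values taken from the log below: bbox AP misses by ~0.26 and
# segm AP by ~0.022, so both fall outside the +/- 0.02 band.
expected = [["bbox", "AP", 50.18, 0.02], ["segm", "AP", 43.87, 0.02]]
actual = {"bbox": {"AP": 49.9204}, "segm": {"AP": 43.8924}}
verify_results(expected, actual)  # -> False
```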

Full logs:
    
    ========================================================================
    Configs to run:
    ./configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/fast_rcnn_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/keypoint_rcnn_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/mask_rcnn_R_50_C4_inference_acc_test.yaml ./configs/quick_schedules/mask_rcnn_R_50_DC5_inference_acc_test.yaml ./configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml ./configs/quick_schedules/retinanet_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/rpn_R_50_FPN_inference_acc_test.yaml ./configs/quick_schedules/semantic_R_50_FPN_inference_acc_test.yaml
    ========================================================================
    ========================================================================
    Running ./configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml ...
    ========================================================================
    ** fvcore version of PathManager will be deprecated soon. **
    ** Please migrate to the version in iopath repo. **
    https://github.com/facebookresearch/iopath 

fvcore version of PathManager will be deprecated soon. Please migrate to the version in iopath repo. https://github.com/facebookresearch/iopath

Command Line Args: Namespace(config_file='./configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml', dist_url='tcp://127.0.0.1:50155', eval_only=True, machine_rank=0, num_gpus=2, num_machines=1, opts=['OUTPUT_DIR', 'inference_test_output'], resume=False)

(the fvcore PathManager deprecation warning is printed once per worker process; further repetitions omitted)

[02/17 20:32:21 detectron2]: Rank of current process: 0. World size: 2
[02/17 20:32:22 detectron2]: Environment info:


sys.platform              linux
Python                    3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0]
numpy                     1.19.2
detectron2                0.3 @/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/detectron2
Compiler                  GCC 7.3
CUDA compiler             CUDA 11.0
detectron2 arch flags     3.7, 5.0, 5.2, 6.0, 6.1, 7.0, 7.5, 8.0
DETECTRON2_ENV_MODULE     <not set>
PyTorch                   1.7.1 @/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torch
PyTorch debug build       False
GPU available             True
GPU 0,1                   Tesla V100-PCIE-16GB (arch=7.0)
CUDA_HOME                 /usr/local/cuda
Pillow                    8.1.0
torchvision               0.8.2 @/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torchvision
torchvision arch flags    3.5, 5.0, 6.0, 7.0, 7.5, 8.0
fvcore                    0.1.3.post20210213
cv2                       3.4.2


PyTorch built with:

[02/17 20:32:22 detectron2]: Command line arguments: Namespace(config_file='./configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml', dist_url='tcp://127.0.0.1:50155', eval_only=True, machine_rank=0, num_gpus=2, num_machines=1, opts=['OUTPUT_DIR', 'inference_test_output'], resume=False)
[02/17 20:32:22 detectron2]: Contents of args.config_file=./configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml:
_BASE_: "../Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml"
MODEL:
  WEIGHTS: "detectron2://Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl"
DATASETS:
  TEST: ("coco_2017_val_100",)
TEST:
  EXPECTED_RESULTS: [["bbox", "AP", 50.18, 0.02], ["segm", "AP", 43.87, 0.02]]

[02/17 20:32:22 detectron2]: Running with full config: CUDNN_BENCHMARK: False DATALOADER: ASPECT_RATIO_GROUPING: True FILTER_EMPTY_ANNOTATIONS: True NUM_WORKERS: 4 REPEAT_THRESHOLD: 0.0 SAMPLER_TRAIN: TrainingSampler DATASETS: PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000 PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000 PROPOSAL_FILES_TEST: () PROPOSAL_FILES_TRAIN: () TEST: ('coco_2017_val_100',) TRAIN: ('coco_2017_train',) GLOBAL: HACK: 1.0 INPUT: CROP: ENABLED: False SIZE: [0.9, 0.9] TYPE: relative_range FORMAT: BGR MASK_FORMAT: polygon MAX_SIZE_TEST: 1333 MAX_SIZE_TRAIN: 1333 MIN_SIZE_TEST: 800 MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800) MIN_SIZE_TRAIN_SAMPLING: choice RANDOM_FLIP: horizontal MODEL: ANCHOR_GENERATOR: ANGLES: [[-90, 0, 90]] ASPECT_RATIOS: [[0.5, 1.0, 2.0]] NAME: DefaultAnchorGenerator OFFSET: 0.0 SIZES: [[32], [64], [128], [256], [512]] BACKBONE: FREEZE_AT: 2 NAME: build_resnet_fpn_backbone DEVICE: cuda FPN: FUSE_TYPE: sum IN_FEATURES: ['res2', 'res3', 'res4', 'res5'] NORM: OUT_CHANNELS: 256 KEYPOINT_ON: False LOAD_PROPOSALS: False MASK_ON: True META_ARCHITECTURE: GeneralizedRCNN PANOPTIC_FPN: COMBINE: ENABLED: True INSTANCES_CONFIDENCE_THRESH: 0.5 OVERLAP_THRESH: 0.5 STUFF_AREA_LIMIT: 4096 INSTANCE_LOSS_WEIGHT: 1.0 PIXEL_MEAN: [103.53, 116.28, 123.675] PIXEL_STD: [1.0, 1.0, 1.0] PROPOSAL_GENERATOR: MIN_SIZE: 0 NAME: RPN RESNETS: DEFORM_MODULATED: False DEFORM_NUM_GROUPS: 1 DEFORM_ON_PER_STAGE: [False, False, False, False] DEPTH: 50 NORM: FrozenBN NUM_GROUPS: 1 OUT_FEATURES: ['res2', 'res3', 'res4', 'res5'] RES2_OUT_CHANNELS: 256 RES5_DILATION: 1 STEM_OUT_CHANNELS: 64 STRIDE_IN_1X1: True WIDTH_PER_GROUP: 64 RETINANET: BBOX_REG_LOSS_TYPE: smooth_l1 BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0) FOCAL_LOSS_ALPHA: 0.25 FOCAL_LOSS_GAMMA: 2.0 IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7'] IOU_LABELS: [0, -1, 1] IOU_THRESHOLDS: [0.4, 0.5] NMS_THRESH_TEST: 0.5 NORM: NUM_CLASSES: 80 NUM_CONVS: 4 PRIOR_PROB: 0.01 SCORE_THRESH_TEST: 0.05 SMOOTH_L1_LOSS_BETA: 0.1 TOPK_CANDIDATES_TEST: 1000 ROI_BOX_CASCADE_HEAD: BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0)) IOUS: (0.5, 0.6, 0.7) ROI_BOX_HEAD: BBOX_REG_LOSS_TYPE: smooth_l1 BBOX_REG_LOSS_WEIGHT: 1.0 BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0) CLS_AGNOSTIC_BBOX_REG: True CONV_DIM: 256 FC_DIM: 1024 NAME: FastRCNNConvFCHead NORM: NUM_CONV: 0 NUM_FC: 2 POOLER_RESOLUTION: 7 POOLER_SAMPLING_RATIO: 0 POOLER_TYPE: ROIAlignV2 SMOOTH_L1_BETA: 0.0 TRAIN_ON_PRED_BOXES: False ROI_HEADS: BATCH_SIZE_PER_IMAGE: 512 IN_FEATURES: ['p2', 'p3', 'p4', 'p5'] IOU_LABELS: [0, 1] IOU_THRESHOLDS: [0.5] NAME: CascadeROIHeads NMS_THRESH_TEST: 0.5 NUM_CLASSES: 80 POSITIVE_FRACTION: 0.25 PROPOSAL_APPEND_GT: True SCORE_THRESH_TEST: 0.05 ROI_KEYPOINT_HEAD: CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512) LOSS_WEIGHT: 1.0 MIN_KEYPOINTS_PER_IMAGE: 1 NAME: KRCNNConvDeconvUpsampleHead NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True NUM_KEYPOINTS: 17 POOLER_RESOLUTION: 14 POOLER_SAMPLING_RATIO: 0 POOLER_TYPE: ROIAlignV2 ROI_MASK_HEAD: CLS_AGNOSTIC_MASK: False CONV_DIM: 256 NAME: MaskRCNNConvUpsampleHead NORM: NUM_CONV: 4 POOLER_RESOLUTION: 14 POOLER_SAMPLING_RATIO: 0 POOLER_TYPE: ROIAlignV2 RPN: BATCH_SIZE_PER_IMAGE: 256 BBOX_REG_LOSS_TYPE: smooth_l1 BBOX_REG_LOSS_WEIGHT: 1.0 BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0) BOUNDARY_THRESH: -1 HEAD_NAME: StandardRPNHead IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6'] IOU_LABELS: [0, -1, 1] IOU_THRESHOLDS: [0.3, 0.7] LOSS_WEIGHT: 1.0 NMS_THRESH: 0.7 POSITIVE_FRACTION: 0.5 POST_NMS_TOPK_TEST: 1000 POST_NMS_TOPK_TRAIN: 
2000 PRE_NMS_TOPK_TEST: 1000 PRE_NMS_TOPK_TRAIN: 2000 SMOOTH_L1_BETA: 0.0 SEM_SEG_HEAD: COMMON_STRIDE: 4 CONVS_DIM: 128 IGNORE_VALUE: 255 IN_FEATURES: ['p2', 'p3', 'p4', 'p5'] LOSS_WEIGHT: 1.0 NAME: SemSegFPNHead NORM: GN NUM_CLASSES: 54 WEIGHTS: detectron2://Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl OUTPUT_DIR: inference_test_output SEED: -1 SOLVER: AMP: ENABLED: False BASE_LR: 0.02 BIAS_LR_FACTOR: 1.0 CHECKPOINT_PERIOD: 5000 CLIP_GRADIENTS: CLIP_TYPE: value CLIP_VALUE: 1.0 ENABLED: False NORM_TYPE: 2.0 GAMMA: 0.1 IMS_PER_BATCH: 16 LR_SCHEDULER_NAME: WarmupMultiStepLR MAX_ITER: 270000 MOMENTUM: 0.9 NESTEROV: False REFERENCE_WORLD_SIZE: 0 STEPS: (210000, 250000) WARMUP_FACTOR: 0.001 WARMUP_ITERS: 1000 WARMUP_METHOD: linear WEIGHT_DECAY: 0.0001 WEIGHT_DECAY_BIAS: 0.0001 WEIGHT_DECAY_NORM: 0.0 TEST: AUG: ENABLED: False FLIP: True MAX_SIZE: 4000 MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200) DETECTIONS_PER_IMAGE: 100 EVAL_PERIOD: 0 EXPECTED_RESULTS: [['bbox', 'AP', 50.18, 0.02], ['segm', 'AP', 43.87, 0.02]] KEYPOINT_OKS_SIGMAS: [] PRECISE_BN: ENABLED: False NUM_ITER: 200 VERSION: 2 VIS_PERIOD: 0 [02/17 20:32:22 detectron2]: Full config saved to inference_test_output/config.yaml [02/17 20:32:22 d2.utils.env]: Using a generated random seed 22294976

GeneralizedRCNN( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (rpn_head): StandardRPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) ) (roi_heads): CascadeROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (box_head): ModuleList( (0): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) (1): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) (2): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) ) (box_predictor): ModuleList( (0): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=81, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (1): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=81, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (2): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=81, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) ) (mask_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(14, 14), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(14, 14), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): 
ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(14, 14), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (mask_head): MaskRCNNConvUpsampleHead( (mask_fcn1): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) (activation): ReLU() ) (mask_fcn2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) (activation): ReLU() ) (mask_fcn3): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) (activation): ReLU() ) (mask_fcn4): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) (activation): ReLU() ) (deconv): ConvTranspose2d(256, 256, kernel_size=(2, 2), stride=(2, 2)) (deconv_relu): ReLU() (predictor): Conv2d(256, 80, kernel_size=(1, 1), stride=(1, 1)) ) ) )
[02/17 20:32:23 fvcore.common.checkpoint]: Loading checkpoint from detectron2://Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl
[02/17 20:32:23 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
[02/17 20:32:23 d2.data.datasets.coco]: Loaded 100 images in COCO format from /disk1/fja/datasets/coco/annotations/instances_val2017_100.json
[02/17 20:32:23 d2.data.build]: Distribution of instances among all 80 categories:
category      #instances   category      #instances   category      #instances
person 341 bicycle 10 car 51
motorcycle 23 airplane 0 bus 10
train 2 truck 4 boat 13
traffic light 9 fire hydrant 5 stop sign 1
parking meter 0 bench 14 bird 13
cat 2 dog 4 horse 5
sheep 0 cow 1 elephant 3
bear 6 zebra 16 giraffe 2
backpack 8 umbrella 8 handbag 12
tie 11 suitcase 0 frisbee 9
skis 8 snowboard 1 sports ball 4
kite 19 baseball bat 2 baseball gl.. 1
skateboard 1 surfboard 3 tennis racket 9
bottle 16 wine glass 5 cup 15
fork 2 knife 0 spoon 1
bowl 7 banana 1 apple 2
sandwich 2 orange 0 broccoli 9
carrot 4 hot dog 0 pizza 4
donut 7 cake 0 chair 47
couch 8 potted plant 1 bed 2
dining table 17 toilet 2 tv 3
laptop 3 mouse 3 remote 4
keyboard 2 cell phone 8 microwave 1
oven 2 toaster 0 sink 2
refrigerator 1 book 19 clock 3
vase 5 scissors 0 teddy bear 2
hair drier 0 toothbrush 0
total 841

[02/17 20:32:23 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[02/17 20:32:23 d2.data.common]: Serializing 100 elements to byte tensors and concatenating them all ...
[02/17 20:32:23 d2.data.common]: Serialized dataset takes 0.46 MiB
WARNING [02/17 20:32:23 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass tasks in directly
[02/17 20:32:23 d2.evaluation.evaluator]: Start inference on 50 images
(the fvcore PathManager deprecation warning repeats many more times here; repetitions omitted)

/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/detectron2/modeling/roi_heads/fast_rcnn.py:124: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1607370156314/work/torch/csrc/utils/python_arg_parser.cpp:882.)
  filter_inds = filter_mask.nonzero()
(the same UserWarning is printed by the second worker process)
[02/17 20:32:29 d2.evaluation.evaluator]: Inference done 11/50. 0.0526 s / img. ETA=0:00:02
[02/17 20:32:32 d2.evaluation.evaluator]: Total inference time: 0:00:03.364090 (0.074758 s / img per device, on 2 devices)
[02/17 20:32:32 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.054453 s / img per device, on 2 devices)
[02/17 20:32:32 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[02/17 20:32:32 d2.evaluation.coco_evaluation]: Saving results to inference_test_output/inference/coco_instances_results.json
[02/17 20:32:32 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
COCOeval_opt.evaluate() finished in 0.10 seconds.
Accumulating evaluation results...
COCOeval_opt.accumulate() finished in 0.15 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.499
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.659
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.526
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.303
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.505
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.681
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.397
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.588
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.609
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.387
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.605
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.772
[02/17 20:32:32 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
  AP     AP50    AP75    APs     APm     APl
49.920  65.888  52.609  30.324  50.519  68.067
[02/17 20:32:32 d2.evaluation.coco_evaluation]: Per-category bbox AP:
category         AP        category         AP        category         AP
person 60.942 bicycle 30.363 car 48.143
motorcycle 47.479 airplane nan bus 68.329
train 92.525 truck 22.606 boat 32.373
traffic light 7.905 fire hydrant 60.726 stop sign 22.500
parking meter nan bench 53.690 bird 46.648
cat 80.198 dog 23.168 horse 81.628
sheep nan cow 30.000 elephant 91.122
bear 65.842 zebra 67.720 giraffe 100.000
backpack 35.645 umbrella 57.483 handbag 36.216
tie 31.249 suitcase nan frisbee 51.706
skis 49.841 snowboard 10.000 sports ball 69.307
kite 67.857 baseball bat 30.149 baseball glove 90.000
skateboard 50.000 surfboard 31.023 tennis racket 23.317
bottle 23.160 wine glass 62.822 cup 38.350
fork 22.723 knife nan spoon 0.000
bowl 1.357 banana 100.000 apple 85.050
sandwich 75.050 orange nan broccoli 42.580
carrot 16.825 hot dog nan pizza 36.881
donut 40.495 cake nan chair 35.481
couch 44.054 potted plant 45.000 bed 63.696
dining table 30.057 toilet 65.347 tv 83.366
laptop 93.267 mouse 48.119 remote 20.842
keyboard 65.248 cell phone 16.395 microwave 100.000
oven 50.495 toaster nan sink 45.446
refrigerator 100.000 book 28.410 clock 33.564
vase 17.755 scissors nan teddy bear 95.050
hair drier nan toothbrush nan
Loading and preparing results...
DONE (t=0.03s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
COCOeval_opt.evaluate() finished in 0.15 seconds.
Accumulating evaluation results...
COCOeval_opt.accumulate() finished in 0.15 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.439
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.630
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.451
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.244
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.486
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.614
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.356
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.508
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.523
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.549
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.673
[02/17 20:32:33 d2.evaluation.coco_evaluation]: Evaluation results for segm:
  AP     AP50    AP75    APs     APm     APl
43.892  63.042  45.078  24.402  48.555  61.388
[02/17 20:32:33 d2.evaluation.coco_evaluation]: Per-category segm AP:
category         AP        category         AP        category         AP
person 49.552 bicycle 14.158 car 41.887
motorcycle 35.845 airplane nan bus 65.695
train 90.000 truck 17.702 boat 37.726
traffic light 8.717 fire hydrant 57.343 stop sign 22.500
parking meter nan bench 36.640 bird 40.181
cat 45.347 dog 20.594 horse 77.189
sheep nan cow 0.000 elephant 69.901
bear 59.228 zebra 55.520 giraffe 80.000
backpack 20.250 umbrella 61.525 handbag 32.072
tie 21.683 suitcase nan frisbee 49.109
skis 15.578 snowboard 0.000 sports ball 71.386
kite 47.841 baseball bat 47.525 baseball glove 90.000
skateboard 20.000 surfboard 10.099 tennis racket 28.243
bottle 22.590 wine glass 69.327 cup 39.779
fork 28.548 knife nan spoon 0.000
bowl 1.855 banana 100.000 apple 92.525
sandwich 55.050 orange nan broccoli 44.850
carrot 5.919 hot dog nan pizza 30.495
donut 43.333 cake nan chair 21.731
couch 29.625 potted plant 40.000 bed 58.746
dining table 15.915 toilet 55.347 tv 85.545
laptop 91.683 mouse 54.752 remote 8.911
keyboard 65.248 cell phone 15.084 microwave 100.000
oven 50.495 toaster nan sink 45.446
refrigerator 100.000 book 24.853 clock 36.832
vase 14.120 scissors nan teddy bear 95.050
hair drier nan toothbrush nan

[02/17 20:32:33 d2.engine.defaults]: Evaluation results for coco_2017_val_100 in csv format: [02/17 20:32:33 d2.evaluation.testing]: copypaste: Task: bbox [02/17 20:32:33 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [02/17 20:32:33 d2.evaluation.testing]: copypaste: 49.9204,65.8877,52.6087,30.3239,50.5193,68.0673 [02/17 20:32:33 d2.evaluation.testing]: copypaste: Task: segm [02/17 20:32:33 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [02/17 20:32:33 d2.evaluation.testing]: copypaste: 43.8924,63.0419,45.0784,24.4019,48.5554,61.3880 ERROR [02/17 20:32:33 d2.evaluation.testing]: Result verification failed! ERROR [02/17 20:32:33 d2.evaluation.testing]: Expected Results: [['bbox', 'AP', 50.18, 0.02], ['segm', 'AP', 43.87, 0.02]] ERROR [02/17 20:32:33 d2.evaluation.testing]: Actual Results: OrderedDict([('bbox', {'AP': 49.920350939963164, 'AP-airplane': nan, 'AP-apple': 85.04950495049505, 'AP-backpack': 35.64522838838506, 'AP-banana': 100.0, 'AP-baseball bat': 30.148514851485146, 'AP-baseball glove': 90.0, 'AP-bear': 65.84158415841584, 'AP-bed': 63.69636963696369, 'AP-bench': 53.68976897689769, 'AP-bicycle': 30.36303630363036, 'AP-bird': 46.648486826704655, 'AP-boat': 32.37270718022028, 'AP-book': 28.40984335033177, 'AP-bottle': 23.160059802972775, 'AP-bowl': 1.3574778530484624, 'AP-broccoli': 42.58048090523339, 'AP-bus': 68.32940436900833, 'AP-cake': nan, 'AP-car': 48.14264993062022, 'AP-carrot': 16.825064859427115, 'AP-cat': 80.19801980198021, 'AP-cell phone': 16.394874781595806, 'AP-chair': 35.48141572129395, 'AP-clock': 33.56435643564357, 'AP-couch': 44.05382267550063, 'AP-cow': 30.0, 'AP-cup': 38.34999509340605, 'AP-dining table': 30.056930693069305, 'AP-dog': 23.16831683168317, 'AP-donut': 40.4950495049505, 'AP-elephant': 91.12211221122112, 'AP-fire hydrant': 60.72607260726072, 'AP-fork': 22.722772277227723, 'AP-frisbee': 51.706270627062715, 'AP-giraffe': 100.0, 'AP-hair drier': nan, 'AP-handbag': 36.21640228538983, 'AP-horse': 81.62816281628163, 'AP-hot dog': nan, 'AP-keyboard': 65.24752475247524, 'AP-kite': 67.85696297256933, 'AP-knife': nan, 'AP-laptop': 93.26732673267327, 'AP-microwave': 100.0, 'AP-motorcycle': 47.4788905958265, 'AP-mouse': 48.118811881188115, 'AP-orange': nan, 'AP-oven': 50.495049504950494, 'AP-parking meter': nan, 'AP-person': 60.942174903241465, 'AP-pizza': 36.88118811881188, 'AP-potted plant': 45.0, 'AP-refrigerator': 100.0, 'AP-remote': 20.84158415841584, 'AP-sandwich': 75.04950495049505, 'AP-scissors': nan, 'AP-sheep': nan, 'AP-sink': 45.445544554455445, 'AP-skateboard': 50.0, 'AP-skis': 49.840826187881945, 'AP-snowboard': 10.0, 'AP-spoon': 0.0, 'AP-sports ball': 69.30693069306929, 'AP-stop sign': 22.5, 'AP-suitcase': nan, 'AP-surfboard': 31.023102310231028, 'AP-teddy bear': 95.04950495049505, 'AP-tennis racket': 23.316831683168317, 'AP-tie': 31.24920153305653, 'AP-toaster': nan, 'AP-toilet': 65.34653465346535, 'AP-toothbrush': nan, 'AP-traffic light': 7.904950495049505, 'AP-train': 92.52475247524752, 'AP-truck': 22.60587403278143, 'AP-tv': 83.36633663366337, 'AP-umbrella': 57.48349834983498, 'AP-vase': 17.754596888260252, 'AP-wine glass': 62.82178217821782, 'AP-zebra': 67.7198210205636, 'AP50': 65.88766003301699, 'AP75': 52.608677824642214, 'APl': 68.0673159504408, 'APm': 50.51933983719206, 'APs': 30.32386761353462}), ('segm', {'AP': 43.89242383599268, 'AP-airplane': nan, 'AP-apple': 92.52475247524752, 'AP-backpack': 20.250159469728484, 'AP-banana': 100.0, 'AP-baseball bat': 47.524752475247524, 'AP-baseball glove': 90.0, 'AP-bear': 
59.22772277227723, 'AP-bed': 58.74587458745874, 'AP-bench': 36.639738973897394, 'AP-bicycle': 14.158415841584157, 'AP-bird': 40.18079280455518, 'AP-boat': 37.72597740543285, 'AP-book': 24.852788190109166, 'AP-bottle': 22.58962587988122, 'AP-bowl': 1.8551328817092232, 'AP-broccoli': 44.84978783592645, 'AP-bus': 65.6954266855257, 'AP-cake': nan, 'AP-car': 41.886982690685244, 'AP-carrot': 5.918886006247684, 'AP-cat': 45.34653465346535, 'AP-cell phone': 15.08363731109953, 'AP-chair': 21.73077102515834, 'AP-clock': 36.83168316831683, 'AP-couch': 29.625176803394627, 'AP-cow': 0.0, 'AP-cup': 39.779037997696484, 'AP-dining table': 15.915120923857087, 'AP-dog': 20.594059405940595, 'AP-donut': 43.333333333333336, 'AP-elephant': 69.9009900990099, 'AP-fire hydrant': 57.343234323432334, 'AP-fork': 28.547854785478542, 'AP-frisbee': 49.10891089108911, 'AP-giraffe': 80.0, 'AP-hair drier': nan, 'AP-handbag': 32.07158319518587, 'AP-horse': 77.18921892189219, 'AP-hot dog': nan, 'AP-keyboard': 65.24752475247524, 'AP-kite': 47.84123636643344, 'AP-knife': nan, 'AP-laptop': 91.68316831683168, 'AP-microwave': 100.0, 'AP-motorcycle': 35.844918503349675, 'AP-mouse': 54.75247524752476, 'AP-orange': nan, 'AP-oven': 50.495049504950494, 'AP-parking meter': nan, 'AP-person': 49.55220687541312, 'AP-pizza': 30.495049504950494, 'AP-potted plant': 40.0, 'AP-refrigerator': 100.0, 'AP-remote': 8.91089108910891, 'AP-sandwich': 55.049504950495056, 'AP-scissors': nan, 'AP-sheep': nan, 'AP-sink': 45.445544554455445, 'AP-skateboard': 20.0, 'AP-skis': 15.577557755775578, 'AP-snowboard': 0.0, 'AP-spoon': 0.0, 'AP-sports ball': 71.38613861386138, 'AP-stop sign': 22.5, 'AP-suitcase': nan, 'AP-surfboard': 10.099009900990099, 'AP-teddy bear': 95.04950495049505, 'AP-tennis racket': 28.24257425742574, 'AP-tie': 21.683168316831683, 'AP-toaster': nan, 'AP-toilet': 55.346534653465355, 'AP-toothbrush': nan, 'AP-traffic light': 8.716831683168316, 'AP-train': 90.0, 'AP-truck': 17.701971877860053, 'AP-tv': 85.54455445544555, 'AP-umbrella': 61.52538715410002, 'AP-vase': 14.11951909476662, 'AP-wine glass': 69.32673267326733, 'AP-zebra': 55.519801980198025, 'AP50': 63.0419077993371, 'AP75': 45.078445807465265, 'APl': 61.387986043486045, 'APm': 48.55535880471486, 'APs': 24.401875736756615})]) Traceback (most recent call last): File "tools/train_net.py", line 169, in args=(args,), File "/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/detectron2/engine/launch.py", line 59, in launch daemon=False, File "/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 112, in join (error_index, exitcode) Exception: process 0 terminated with exit code 1


## Environment:

sys.platform              linux
Python                    3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0]
numpy                     1.19.2
detectron2                0.3 @/disk1/fja/detectron2/detectron2
detectron2._C             failed to import. detectron2 is not built correctly
Compiler                  c++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
CUDA compiler             Build cuda_11.1.TC455_06.29190527_0
DETECTRON2_ENV_MODULE     <not set>
PyTorch                   1.7.1 @/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torch
PyTorch debug build       False
GPU available             True
GPU 0,1                   Tesla V100-PCIE-16GB (arch=7.0)
CUDA_HOME                 /usr/local/cuda
Pillow                    8.1.0
torchvision               0.8.2 @/disk1/cea/fja_detectron2_env/lib/python3.7/site-packages/torchvision
torchvision arch flags    3.5, 5.0, 6.0, 7.0, 7.5, 8.0
fvcore                    0.1.3.post20210213
cv2                       3.4.2


PyTorch built with:

github-actions[bot] commented 3 years ago

You've chosen to report an unexpected problem or bug. Unless you already know the root cause, please include details by filling in the issue template. The following information is missing: "Instructions To Reproduce the Issue and Full Logs".

ppwwyyxx commented 3 years ago

This is because different versions of libjpeg decode JPEG images slightly differently. This is unfortunate, but we probably have to live with it unless there are better ideas.
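If you want to confirm this on your own machine, a quick check (a hypothetical snippet, not part of detectron2) is to decode the same validation JPEG in both environments and compare a hash of the raw pixels; differing digests mean the installed libjpeg builds decode the file slightly differently, which is enough to move AP by a few tenths on a 100-image subset.

```python
# Hash the decoded pixels of one COCO image; run this in both
# environments and compare the digests. Different libjpeg versions
# can yield slightly different pixel values for the same file.
import hashlib

import numpy as np
from PIL import Image

def pixel_digest(path: str) -> str:
    # Decode the JPEG to an RGB array and hash its raw bytes.
    arr = np.asarray(Image.open(path).convert("RGB"))
    return hashlib.md5(arr.tobytes()).hexdigest()

# The path is illustrative; point it at any image from coco_2017_val_100.
print(pixel_digest("datasets/coco/val2017/000000000139.jpg"))
```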