m-kashani opened this issue 4 years ago
FCNN_ROW1:
YAML_FILE = Faster_ROW1
GeneralizedRCNN( (backbone): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): 
Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): Res5ROIHeads( (pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=2048, out_features=11, bias=True) (bbox_pred): Linear(in_features=2048, out_features=40, bias=True) ) ) ) [04/27 11:10:16 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/27 11:10:16 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|:---------:|:----------:|:-----------:|:----------:|:--------:|:----------:|
| Past      | 283        | Gorgonia    | 701        | SeaRods  | 185        |
| Antillo   | 544        | Fish        | 211        | Ssid     | 29         |
| Orb       | 92         | Other_Coral | 48         | Apalm    | 218        |
| Galaxaura | 804        |             |            |          |            |
| total     | 3115       |             |            |          |            |
[04/27 11:10:16 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ... [04/27 11:10:16 d2.data.common]: Serialized dataset takes 0.22 MiB [04/27 11:10:16 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()] [04/27 11:10:16 d2.data.build]: Using training sampler TrainingSampler 'roi_heads.box_predictor.cls_score.weight' has shape (81, 2048) in the checkpoint but (11, 2048) in the model! Skipped. 'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 2048) in the checkpoint but (40, 2048) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped. [04/27 11:10:34 d2.engine.train_loop]: Starting training from iteration 0 [04/27 11:11:14 d2.utils.events]: eta: 0:09:10 iter: 19 total_loss: 4.351 loss_cls: 2.383 loss_box_reg: 0.719 loss_rpn_cls: 1.071 loss_rpn_loc: 0.224 time: 1.9334 data_time: 0.9603 lr: 0.000020 max_mem: 5464M [04/27 11:11:52 d2.utils.events]: eta: 0:08:19 iter: 39 total_loss: 3.649 loss_cls: 1.994 loss_box_reg: 0.742 loss_rpn_cls: 0.651 loss_rpn_loc: 0.200 time: 1.9001 data_time: 0.8598 lr: 0.000040 max_mem: 5464M [04/27 11:12:30 d2.utils.events]: eta: 0:07:38 iter: 59 total_loss: 2.806 loss_cls: 1.357 loss_box_reg: 0.765 loss_rpn_cls: 0.491 loss_rpn_loc: 0.202 time: 1.8955 data_time: 0.8656 lr: 0.000060 max_mem: 5464M [04/27 11:13:06 d2.utils.events]: eta: 0:06:57 iter: 79 total_loss: 2.496 loss_cls: 1.090 loss_box_reg: 0.794 loss_rpn_cls: 0.429 loss_rpn_loc: 0.189 time: 1.8791 data_time: 0.8138 lr: 0.000080 max_mem: 5464M [04/27 11:13:43 d2.utils.events]: eta: 0:06:16 iter: 99 total_loss: 2.408 loss_cls: 1.050 loss_box_reg: 0.794 loss_rpn_cls: 0.403 loss_rpn_loc: 0.192 time: 1.8682 data_time: 0.7950 lr: 0.000100 max_mem: 5464M [04/27 11:14:20 d2.utils.events]: eta: 0:05:38 iter: 119 total_loss: 2.393 loss_cls: 1.018 loss_box_reg: 0.793 loss_rpn_cls: 0.386 loss_rpn_loc: 0.185 time: 1.8681 data_time: 0.8437 lr: 0.000120 max_mem: 5464M [04/27 11:14:57 d2.utils.events]: eta: 0:05:00 iter: 139 total_loss: 2.333 loss_cls: 0.969 loss_box_reg: 0.805 loss_rpn_cls: 0.365 loss_rpn_loc: 0.187 time: 1.8623 data_time: 0.7956 lr: 0.000140 max_mem: 5464M [04/27 11:15:33 d2.utils.events]: eta: 0:04:22 iter: 159 total_loss: 2.307 loss_cls: 0.956 loss_box_reg: 0.825 loss_rpn_cls: 0.343 loss_rpn_loc: 0.186 time: 1.8577 data_time: 0.8010 lr: 0.000160 max_mem: 5464M [04/27 11:16:09 d2.utils.events]: eta: 0:03:43 iter: 179 total_loss: 2.267 loss_cls: 0.937 loss_box_reg: 0.829 loss_rpn_cls: 0.321 loss_rpn_loc: 0.185 time: 1.8508 data_time: 0.7746 lr: 0.000180 max_mem: 5464M
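The `has shape ... Skipped` warnings in the log above are expected when fine-tuning a COCO-pretrained model with a different class count; a quick sanity check on where the 11 and 40 come from (assuming the 10 categories listed in the table above):

```python
# Sketch: box-head output sizes for NUM_CLASSES = 10 vs. the COCO checkpoint (80 classes).
num_classes = 10
cls_out  = num_classes + 1   # +1 background slot -> 11   (COCO checkpoint: 80 + 1 = 81)
bbox_out = num_classes * 4   # 4 box deltas per class -> 40   (COCO checkpoint: 80 * 4 = 320)
print(cls_out, bbox_out)     # 11 40 -- so these two layers are re-initialized, not loaded
```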
[04/27 11:16:46 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|:---------:|:----------:|:-----------:|:----------:|:--------:|:----------:|
| Past      | 43         | Gorgonia    | 179        | SeaRods  | 36         |
| Antillo   | 127        | Fish        | 59         | Ssid     | 1          |
| Orb       | 26         | Other_Coral | 10         | Apalm    | 56         |
| Galaxaura | 347        |             |            |          |            |
| total     | 884        |             |            |          |            |
[04/27 11:16:46 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 11:16:46 d2.data.common]: Serialized dataset takes 0.06 MiB WARNING [04/27 11:16:46 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_val'. Trying to convert it to COCO format ... [04/27 11:16:46 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_val' to COCO format ...) [04/27 11:16:46 d2.data.datasets.coco]: Converting dataset dicts into COCO format [04/27 11:16:47 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884 [04/27 11:16:47 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_val_coco_format.json' ... [04/27 11:16:47 d2.evaluation.evaluator]: Start inference on 27 images [04/27 11:16:51 d2.evaluation.evaluator]: Inference done 11/27. 0.2513 s / img. ETA=0:00:04 [04/27 11:16:56 d2.evaluation.evaluator]: Total inference time: 0:00:05.739530 (0.260888 s / img per device, on 1 devices) [04/27 11:16:56 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.246651 s / img per device, on 1 devices) [04/27 11:16:56 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 11:16:56 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 11:16:56 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.00s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=0.95s). Accumulating evaluation results... DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.020
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.053
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.022
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.009
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.043
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.074
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.016
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.081
[04/27 11:16:57 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

|  AP   | AP50  | AP75  | APs | APm   | APl   |
|:-----:|:-----:|:-----:|:---:|:-----:|:-----:|
| 1.988 | 5.349 | 0.996 | nan | 0.336 | 2.181 |
[04/27 11:16:57 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 11:16:57 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|:---------:|:-----:|:-----------:|:-----:|:--------:|:-----:|
| Past      | 0.000 | Gorgonia    | 6.591 | SeaRods  | 0.000 |
| Antillo   | 5.834 | Fish        | 0.000 | Ssid     | 0.000 |
| Orb       | 0.000 | Other_Coral | 0.000 | Apalm    | 4.923 |
| Galaxaura | 2.531 |             |       |          |       |
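As a quick consistency check, the overall bbox AP reported above (1.988) is simply the mean of these ten per-category APs:

```python
# Mean of the per-category APs from the table above (first evaluation of FCNN_ROW1).
per_category_ap = [0.000, 6.591, 0.000, 5.834, 0.000, 0.000, 0.000, 0.000, 4.923, 2.531]
print(sum(per_category_ap) / len(per_category_ap))   # 1.9879 -> matches the copypaste AP below
```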
[04/27 11:16:57 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format: [04/27 11:16:57 d2.evaluation.testing]: copypaste: Task: bbox [04/27 11:16:57 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [04/27 11:16:57 d2.evaluation.testing]: copypaste: 1.9879,5.3489,0.9960,nan,0.3363,2.1811 [04/27 11:16:57 d2.utils.events]: eta: 0:03:05 iter: 199 total_loss: 2.232 loss_cls: 0.904 loss_box_reg: 0.838 loss_rpn_cls: 0.307 loss_rpn_loc: 0.182 time: 1.8467 data_time: 0.7843 lr: 0.000200 max_mem: 5464M [04/27 11:17:32 d2.utils.events]: eta: 0:02:28 iter: 219 total_loss: 2.197 loss_cls: 0.869 loss_box_reg: 0.836 loss_rpn_cls: 0.303 loss_rpn_loc: 0.180 time: 1.8395 data_time: 0.7376 lr: 0.000220 max_mem: 5464M [04/27 11:18:08 d2.utils.events]: eta: 0:01:51 iter: 239 total_loss: 2.155 loss_cls: 0.845 loss_box_reg: 0.880 loss_rpn_cls: 0.278 loss_rpn_loc: 0.175 time: 1.8372 data_time: 0.7872 lr: 0.000240 max_mem: 5464M [04/27 11:18:45 d2.utils.events]: eta: 0:01:14 iter: 259 total_loss: 2.066 loss_cls: 0.807 loss_box_reg: 0.823 loss_rpn_cls: 0.264 loss_rpn_loc: 0.190 time: 1.8367 data_time: 0.7943 lr: 0.000260 max_mem: 5464M [04/27 11:19:21 d2.utils.events]: eta: 0:00:38 iter: 279 total_loss: 2.057 loss_cls: 0.782 loss_box_reg: 0.838 loss_rpn_cls: 0.265 loss_rpn_loc: 0.186 time: 1.8332 data_time: 0.7579 lr: 0.000280 max_mem: 5464M [04/27 11:19:59 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 11:19:59 d2.data.common]: Serialized dataset takes 0.06 MiB [04/27 11:19:59 d2.evaluation.evaluator]: Start inference on 27 images [04/27 11:20:03 d2.evaluation.evaluator]: Inference done 11/27. 0.2542 s / img. ETA=0:00:04 [04/27 11:20:08 d2.evaluation.evaluator]: Total inference time: 0:00:06.135971 (0.278908 s / img per device, on 1 devices) [04/27 11:20:08 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.248344 s / img per device, on 1 devices) [04/27 11:20:08 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 11:20:08 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 11:20:08 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.01s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=1.25s). Accumulating evaluation results... DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.039
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.095
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.027
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.043
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.019
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.065
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.108
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.022
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.118
[04/27 11:20:09 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

|  AP   | AP50  | AP75  | APs | APm   | APl   |
|:-----:|:-----:|:-----:|:---:|:-----:|:-----:|
| 3.925 | 9.492 | 2.737 | nan | 0.585 | 4.252 |
[04/27 11:20:09 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 11:20:09 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|:---------:|:-----:|:-----------:|:-----:|:--------:|:-----:|
| Past      | 0.000 | Gorgonia    | 9.038 | SeaRods  | 0.000 |
| Antillo   | 8.096 | Fish        | 0.000 | Ssid     | 0.000 |
| Orb       | 6.733 | Other_Coral | 0.000 | Apalm    | 9.739 |
| Galaxaura | 5.643 |             |       |          |       |
[04/27 11:20:09 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format: [04/27 11:20:09 d2.evaluation.testing]: copypaste: Task: bbox [04/27 11:20:09 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [04/27 11:20:09 d2.evaluation.testing]: copypaste: 3.9248,9.4915,2.7366,nan,0.5847,4.2522 [04/27 11:20:09 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 2.021 loss_cls: 0.723 loss_box_reg: 0.855 loss_rpn_cls: 0.264 loss_rpn_loc: 0.180 time: 1.8328 data_time: 0.8009 lr: 0.000300 max_mem: 5464M [04/27 11:20:09 d2.engine.hooks]: Overall training speed: 297 iterations in 0:09:06 (1.8390 s / it) [04/27 11:20:09 d2.engine.hooks]: Total training time: 0:09:29 (0:00:23 on hooks)
FCNN_ROW2:
YAML_FILE = "COCO-Detection/faster_rcnn_R_50_DC5_1x.yaml": "137847829/model_final_51d356.pkl"
GeneralizedRCNN( (backbone): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): 
Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): Conv2d( 1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), 
bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(2048, 15, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(2048, 60, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): StandardROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) ) ) (box_head): FastRCNNConvFCHead( (fc1): Linear(in_features=100352, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=40, bias=True) ) ) ) [04/27 08:44:11 d2.data.build]: Removed 0 images with no usable annotations. 106 images left. [04/27 08:44:11 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ... [04/27 08:44:11 d2.data.common]: Serialized dataset takes 0.22 MiB [04/27 08:44:11 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()] [04/27 08:44:11 d2.data.build]: Using training sampler TrainingSampler 'roi_heads.box_predictor.cls_score.weight' has shape (81, 1024) in the checkpoint but (11, 1024) in the model! Skipped. 'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 1024) in the checkpoint but (40, 1024) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped. 
[04/27 08:44:12 d2.engine.train_loop]: Starting training from iteration 0 [04/27 08:45:16 d2.utils.events]: eta: 0:14:54 iter: 19 total_loss: 4.748 loss_cls: 2.411 loss_box_reg: 0.766 loss_rpn_cls: 1.322 loss_rpn_loc: 0.202 time: 3.1720 data_time: 0.1939 lr: 0.000020 max_mem: 10574M [04/27 08:46:19 d2.utils.events]: eta: 0:13:51 iter: 39 total_loss: 3.631 loss_cls: 2.068 loss_box_reg: 0.730 loss_rpn_cls: 0.602 loss_rpn_loc: 0.191 time: 3.1791 data_time: 0.0452 lr: 0.000040 max_mem: 10574M [04/27 08:47:23 d2.utils.events]: eta: 0:12:47 iter: 59 total_loss: 2.964 loss_cls: 1.538 loss_box_reg: 0.760 loss_rpn_cls: 0.475 loss_rpn_loc: 0.187 time: 3.1804 data_time: 0.0483 lr: 0.000060 max_mem: 10574M [04/27 08:48:27 d2.utils.events]: eta: 0:11:43 iter: 79 total_loss: 2.508 loss_cls: 1.131 loss_box_reg: 0.753 loss_rpn_cls: 0.456 loss_rpn_loc: 0.178 time: 3.1796 data_time: 0.0460 lr: 0.000080 max_mem: 10574M [04/27 08:49:30 d2.utils.events]: eta: 0:10:39 iter: 99 total_loss: 2.430 loss_cls: 1.079 loss_box_reg: 0.788 loss_rpn_cls: 0.389 loss_rpn_loc: 0.180 time: 3.1789 data_time: 0.0451 lr: 0.000100 max_mem: 10574M [04/27 08:50:33 d2.utils.events]: eta: 0:09:35 iter: 119 total_loss: 2.370 loss_cls: 1.048 loss_box_reg: 0.792 loss_rpn_cls: 0.370 loss_rpn_loc: 0.179 time: 3.1763 data_time: 0.0440 lr: 0.000120 max_mem: 10574M [04/27 08:51:37 d2.utils.events]: eta: 0:08:31 iter: 139 total_loss: 2.332 loss_cls: 1.019 loss_box_reg: 0.805 loss_rpn_cls: 0.346 loss_rpn_loc: 0.178 time: 3.1760 data_time: 0.0403 lr: 0.000140 max_mem: 10574M [04/27 08:52:40 d2.utils.events]: eta: 0:07:28 iter: 159 total_loss: 2.294 loss_cls: 0.993 loss_box_reg: 0.816 loss_rpn_cls: 0.307 loss_rpn_loc: 0.171 time: 3.1747 data_time: 0.0429 lr: 0.000160 max_mem: 10574M [04/27 08:53:44 d2.utils.events]: eta: 0:06:24 iter: 179 total_loss: 2.253 loss_cls: 0.967 loss_box_reg: 0.817 loss_rpn_cls: 0.298 loss_rpn_loc: 0.165 time: 3.1744 data_time: 0.0455 lr: 0.000180 max_mem: 10574M [04/27 08:54:47 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 08:54:47 d2.data.common]: Serialized dataset takes 0.06 MiB [04/27 08:54:47 d2.evaluation.evaluator]: Start inference on 27 images [04/27 08:54:52 d2.evaluation.evaluator]: Inference done 11/27. 0.1547 s / img. ETA=0:00:03 [04/27 08:54:55 d2.evaluation.evaluator]: Total inference time: 0:00:04.464599 (0.202936 s / img per device, on 1 devices) [04/27 08:54:55 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.149450 s / img per device, on 1 devices) [04/27 08:54:55 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 08:54:55 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 08:54:55 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.00s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=0.92s). Accumulating evaluation results... DONE (t=0.04s). 
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.032
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.023
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.047
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.016
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.050
[04/27 08:54:56 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

|  AP   | AP50  | AP75  | APs | APm   | APl   |
|:-----:|:-----:|:-----:|:---:|:-----:|:-----:|
| 1.000 | 3.209 | 0.402 | nan | 0.141 | 1.094 |
[04/27 08:54:56 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 08:54:56 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|:---------:|:-----:|:-----------:|:-----:|:--------:|:-----:|
| Past      | 0.000 | Gorgonia    | 5.339 | SeaRods  | 0.000 |
| Antillo   | 1.601 | Fish        | 0.000 | Ssid     | 0.000 |
| Orb       | 0.000 | Other_Coral | 0.000 | Apalm    | 0.624 |
| Galaxaura | 2.438 |             |       |          |       |
[04/27 08:54:56 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format: [04/27 08:54:56 d2.evaluation.testing]: copypaste: Task: bbox [04/27 08:54:56 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [04/27 08:54:56 d2.evaluation.testing]: copypaste: 1.0001,3.2089,0.4019,nan,0.1411,1.0939 [04/27 08:54:56 d2.utils.events]: eta: 0:05:20 iter: 199 total_loss: 2.239 loss_cls: 0.937 loss_box_reg: 0.835 loss_rpn_cls: 0.282 loss_rpn_loc: 0.171 time: 3.1740 data_time: 0.0413 lr: 0.000200 max_mem: 10574M [04/27 08:55:57 d2.utils.events]: eta: 0:04:17 iter: 219 total_loss: 2.200 loss_cls: 0.932 loss_box_reg: 0.846 loss_rpn_cls: 0.264 loss_rpn_loc: 0.169 time: 3.1634 data_time: 0.0812 lr: 0.000220 max_mem: 10574M [04/27 08:57:00 d2.utils.events]: eta: 0:03:13 iter: 239 total_loss: 2.149 loss_cls: 0.890 loss_box_reg: 0.843 loss_rpn_cls: 0.235 loss_rpn_loc: 0.156 time: 3.1635 data_time: 0.0429 lr: 0.000240 max_mem: 10574M [04/27 08:58:04 d2.utils.events]: eta: 0:02:10 iter: 259 total_loss: 2.089 loss_cls: 0.867 loss_box_reg: 0.834 loss_rpn_cls: 0.233 loss_rpn_loc: 0.161 time: 3.1635 data_time: 0.0453 lr: 0.000260 max_mem: 10574M [04/27 08:59:07 d2.utils.events]: eta: 0:01:06 iter: 279 total_loss: 2.075 loss_cls: 0.834 loss_box_reg: 0.855 loss_rpn_cls: 0.225 loss_rpn_loc: 0.167 time: 3.1640 data_time: 0.0423 lr: 0.000280 max_mem: 10574M [04/27 09:00:18 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 09:00:18 d2.data.common]: Serialized dataset takes 0.06 MiB [04/27 09:00:18 d2.evaluation.evaluator]: Start inference on 27 images [04/27 09:00:21 d2.evaluation.evaluator]: Inference done 11/27. 0.1538 s / img. ETA=0:00:03 [04/27 09:00:25 d2.evaluation.evaluator]: Total inference time: 0:00:04.724099 (0.214732 s / img per device, on 1 devices) [04/27 09:00:25 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.150269 s / img per device, on 1 devices) [04/27 09:00:25 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 09:00:25 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 09:00:25 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.14s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=1.12s). Accumulating evaluation results... DONE (t=0.05s). Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.028 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.081 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.012 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.031 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.010 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.047 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.079 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.013 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.086
[04/27 09:00:26 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

|  AP   | AP50  | AP75  | APs | APm   | APl   |
|:-----:|:-----:|:-----:|:---:|:-----:|:-----:|
| 2.839 | 8.094 | 1.208 | nan | 0.187 | 3.131 |
[04/27 09:00:26 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 09:00:26 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP     | category | AP    |
|:---------:|:-----:|:-----------:|:------:|:--------:|:-----:|
| Past      | 0.000 | Gorgonia    | 10.294 | SeaRods  | 0.000 |
| Antillo   | 4.436 | Fish        | 0.000  | Ssid     | 0.000 |
| Orb       | 0.000 | Other_Coral | 0.000  | Apalm    | 7.897 |
| Galaxaura | 5.759 |             |        |          |       |
[04/27 09:00:26 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format: [04/27 09:00:26 d2.evaluation.testing]: copypaste: Task: bbox [04/27 09:00:26 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [04/27 09:00:26 d2.evaluation.testing]: copypaste: 2.8386,8.0935,1.2079,nan,0.1866,3.1309 [04/27 09:00:26 d2.utils.events]: eta: 0:00:03 iter: 299 total_loss: 2.013 loss_cls: 0.788 loss_box_reg: 0.835 loss_rpn_cls: 0.206 loss_rpn_loc: 0.151 time: 3.1626 data_time: 0.0446 lr: 0.000300 max_mem: 10574M [04/27 09:00:26 d2.engine.hooks]: Overall training speed: 297 iterations in 0:15:42 (3.1733 s / it) [04/27 09:00:26 d2.engine.hooks]: Total training time: 0:16:07 (0:00:25 on hooks)
FCNN_ROW3:
YAML_FILE = "COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml"

[04/27 09:47:28 d2.engine.defaults]: Model:
GeneralizedRCNN( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): StandardROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (box_head): FastRCNNConvFCHead( (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=40, bias=True) ) ) ) [04/27 09:47:29 d2.data.build]: Removed 0 images with no usable annotations. 106 images left. [04/27 09:47:29 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ... [04/27 09:47:29 d2.data.common]: Serialized dataset takes 0.22 MiB [04/27 09:47:29 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()] [04/27 09:47:29 d2.data.build]: Using training sampler TrainingSampler 'roi_heads.box_predictor.cls_score.weight' has shape (81, 1024) in the checkpoint but (11, 1024) in the model! Skipped. 'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 1024) in the checkpoint but (40, 1024) in the model! Skipped. 'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped. 
[04/27 09:47:29 d2.engine.train_loop]: Starting training from iteration 0 [04/27 09:48:07 d2.utils.events]: eta: 0:08:42 iter: 19 total_loss: 4.331 loss_cls: 2.326 loss_box_reg: 0.626 loss_rpn_cls: 1.241 loss_rpn_loc: 0.210 time: 1.8436 data_time: 0.9427 lr: 0.000020 max_mem: 11910M [04/27 09:48:43 d2.utils.events]: eta: 0:08:00 iter: 39 total_loss: 3.173 loss_cls: 1.934 loss_box_reg: 0.602 loss_rpn_cls: 0.440 loss_rpn_loc: 0.195 time: 1.8273 data_time: 0.8459 lr: 0.000040 max_mem: 11910M [04/27 09:49:20 d2.utils.events]: eta: 0:07:20 iter: 59 total_loss: 2.393 loss_cls: 1.207 loss_box_reg: 0.613 loss_rpn_cls: 0.317 loss_rpn_loc: 0.195 time: 1.8216 data_time: 0.8392 lr: 0.000060 max_mem: 11910M [04/27 09:49:56 d2.utils.events]: eta: 0:06:41 iter: 79 total_loss: 2.204 loss_cls: 1.078 loss_box_reg: 0.648 loss_rpn_cls: 0.300 loss_rpn_loc: 0.195 time: 1.8179 data_time: 0.8349 lr: 0.000080 max_mem: 11910M [04/27 09:50:31 d2.utils.events]: eta: 0:06:03 iter: 99 total_loss: 2.195 loss_cls: 1.052 loss_box_reg: 0.644 loss_rpn_cls: 0.277 loss_rpn_loc: 0.183 time: 1.8108 data_time: 0.8062 lr: 0.000100 max_mem: 11910M [04/27 09:51:07 d2.utils.events]: eta: 0:05:26 iter: 119 total_loss: 2.184 loss_cls: 1.033 loss_box_reg: 0.673 loss_rpn_cls: 0.274 loss_rpn_loc: 0.192 time: 1.8036 data_time: 0.7839 lr: 0.000120 max_mem: 11910M [04/27 09:51:42 d2.utils.events]: eta: 0:04:49 iter: 139 total_loss: 2.131 loss_cls: 1.020 loss_box_reg: 0.691 loss_rpn_cls: 0.271 loss_rpn_loc: 0.178 time: 1.7981 data_time: 0.8013 lr: 0.000140 max_mem: 11910M [04/27 09:52:18 d2.utils.events]: eta: 0:04:12 iter: 159 total_loss: 2.181 loss_cls: 1.003 loss_box_reg: 0.708 loss_rpn_cls: 0.269 loss_rpn_loc: 0.194 time: 1.7961 data_time: 0.8017 lr: 0.000160 max_mem: 11910M [04/27 09:52:54 d2.utils.events]: eta: 0:03:37 iter: 179 total_loss: 2.203 loss_cls: 1.008 loss_box_reg: 0.730 loss_rpn_cls: 0.256 loss_rpn_loc: 0.179 time: 1.7955 data_time: 0.8293 lr: 0.000180 max_mem: 11910M [04/27 09:53:30 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 09:53:30 d2.data.common]: Serialized dataset takes 0.06 MiB [04/27 09:53:30 d2.evaluation.evaluator]: Start inference on 27 images [04/27 09:53:34 d2.evaluation.evaluator]: Inference done 11/27. 0.1151 s / img. ETA=0:00:03 [04/27 09:53:37 d2.evaluation.evaluator]: Total inference time: 0:00:04.426439 (0.201202 s / img per device, on 1 devices) [04/27 09:53:37 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.112587 s / img per device, on 1 devices) [04/27 09:53:37 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 09:53:37 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 09:53:37 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.00s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=0.74s). Accumulating evaluation results... DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.007
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.026
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.016
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.027
[04/27 09:53:38 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

|  AP   | AP50  | AP75  | APs | APm   | APl   |
|:-----:|:-----:|:-----:|:---:|:-----:|:-----:|
| 0.235 | 0.693 | 0.100 | nan | 0.164 | 0.358 |
[04/27 09:53:38 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 09:53:38 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|:---------:|:-----:|:-----------:|:-----:|:--------:|:-----:|
| Past      | 0.000 | Gorgonia    | 1.405 | SeaRods  | 0.000 |
| Antillo   | 0.090 | Fish        | 0.000 | Ssid     | 0.000 |
| Orb       | 0.000 | Other_Coral | 0.000 | Apalm    | 0.000 |
| Galaxaura | 0.857 |             |       |          |       |
[04/27 09:53:38 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format: [04/27 09:53:38 d2.evaluation.testing]: copypaste: Task: bbox [04/27 09:53:38 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [04/27 09:53:38 d2.evaluation.testing]: copypaste: 0.2353,0.6927,0.0999,nan,0.1638,0.3580 [04/27 09:53:38 d2.utils.events]: eta: 0:03:01 iter: 199 total_loss: 2.165 loss_cls: 1.004 loss_box_reg: 0.735 loss_rpn_cls: 0.241 loss_rpn_loc: 0.176 time: 1.7946 data_time: 0.8123 lr: 0.000200 max_mem: 11910M [04/27 09:54:14 d2.utils.events]: eta: 0:02:25 iter: 219 total_loss: 2.127 loss_cls: 0.987 loss_box_reg: 0.739 loss_rpn_cls: 0.229 loss_rpn_loc: 0.181 time: 1.7954 data_time: 0.8078 lr: 0.000220 max_mem: 11910M [04/27 09:54:51 d2.utils.events]: eta: 0:01:49 iter: 239 total_loss: 2.142 loss_cls: 0.987 loss_box_reg: 0.744 loss_rpn_cls: 0.217 loss_rpn_loc: 0.170 time: 1.8015 data_time: 0.8783 lr: 0.000240 max_mem: 11910M [04/27 09:55:27 d2.utils.events]: eta: 0:01:13 iter: 259 total_loss: 2.155 loss_cls: 0.997 loss_box_reg: 0.773 loss_rpn_cls: 0.224 loss_rpn_loc: 0.174 time: 1.8016 data_time: 0.8143 lr: 0.000260 max_mem: 11910M [04/27 09:56:03 d2.utils.events]: eta: 0:00:37 iter: 279 total_loss: 2.108 loss_cls: 0.953 loss_box_reg: 0.779 loss_rpn_cls: 0.195 loss_rpn_loc: 0.172 time: 1.8013 data_time: 0.8157 lr: 0.000280 max_mem: 11910M [04/27 09:56:41 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ... [04/27 09:56:41 d2.data.common]: Serialized dataset takes 0.06 MiB [04/27 09:56:41 d2.evaluation.evaluator]: Start inference on 27 images [04/27 09:56:44 d2.evaluation.evaluator]: Inference done 11/27. 0.1216 s / img. ETA=0:00:02 [04/27 09:56:48 d2.evaluation.evaluator]: Total inference time: 0:00:04.381841 (0.199175 s / img per device, on 1 devices) [04/27 09:56:48 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.113874 s / img per device, on 1 devices) [04/27 09:56:48 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [04/27 09:56:48 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json [04/27 09:56:48 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.00s) creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=0.97s). Accumulating evaluation results... DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.016
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.038
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.009
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.018
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.030
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.050
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.027
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.053
[04/27 09:56:49 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 1.577 | 3.835 | 0.937 | nan | 0.527 | 1.760 |
[04/27 09:56:49 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 09:56:49 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 0.000 | Gorgonia | 7.291 | SeaRods | 0.000 |
| Antillo | 2.409 | Fish | 0.000 | Ssid | 0.000 |
| Orb | 0.000 | Other_Coral | 0.000 | Apalm | 3.834 |
| Galaxaura | 2.231 | | | | |
[04/27 09:56:49 d2.engine.defaults]: Evaluation results for CoralReef_val in csv format:
[04/27 09:56:49 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 09:56:49 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 09:56:49 d2.evaluation.testing]: copypaste: 1.5766,3.8349,0.9374,nan,0.5269,1.7596
[04/27 09:56:49 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 2.108 loss_cls: 0.947 loss_box_reg: 0.770 loss_rpn_cls: 0.204 loss_rpn_loc: 0.172 time: 1.8000 data_time: 0.8125 lr: 0.000300 max_mem: 11910M
[04/27 09:56:49 d2.engine.hooks]: Overall training speed: 297 iterations in 0:08:56 (1.8061 s / it)
[04/27 09:56:49 d2.engine.hooks]: Total training time: 0:09:14 (0:00:18 on hooks)
FCNN_ROW4:
YAML_FILE = ""
FCNN_ROW5:
YAML_NAME = Faster_Row5
Faster_Row5 = "COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml"
cfg.DATASETS.TRAIN = (name+"train",)
cfg.DATASETS.TEST = (name+"val",)
cfg.DATALOADER.NUM_WORKERS = 8
cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001
# cfg.SOLVER.WARMUP_ITERS = 100
cfg.SOLVER.MAX_ITER = 1000
# cfg.SOLVER.STEPS = (500, 1000)
cfg.SOLVER.GAMMA = 0.05
# ROI head settings
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)
cfg.TEST.EVAL_PERIOD = 400
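For context, a minimal sketch of how the FCNN_ROW5 settings above are typically wired into a detectron2 run. The dataset registration, the `name` prefix and the `classes` list are assumptions (they are not shown in this excerpt); the config keys and the `DefaultTrainer`/`COCOEvaluator` usage follow the standard detectron2 API.

```python
# Minimal sketch (assumed setup, not the exact script behind these logs): builds the
# FCNN_ROW5 config above on top of the model-zoo yaml and trains with periodic COCO evaluation.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

Faster_Row5 = "COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml"
name = "CoralReef_RCNN_ROW5"            # assumed dataset prefix (matches the log below)
classes = ["Past", "Gorgonia", "SeaRods", "Antillo", "Fish",
           "Ssid", "Orb", "Other_Coral", "Apalm", "Galaxaura"]

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(Faster_Row5))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(Faster_Row5)  # COCO-pretrained weights
# ... apply the DATASETS / DATALOADER / SOLVER / ROI_HEADS settings listed above ...
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)
cfg.TEST.EVAL_PERIOD = 400

class Trainer(DefaultTrainer):
    # EVAL_PERIOD only takes effect if the trainer knows how to build an evaluator.
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder="coco_eval"):
        # note: older detectron2 releases also take `cfg` as the second positional argument
        return COCOEvaluator(dataset_name, output_dir=output_folder)

trainer = Trainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```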
GeneralizedRCNN(
(backbone): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
(proposal_generator): RPN(
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
(rpn_head): StandardRPNHead(
(conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(objectness_logits): Conv2d(2048, 15, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(2048, 60, kernel_size=(1, 1), stride=(1, 1))
)
)
(roi_heads): StandardROIHeads(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
)
)
(box_head): FastRCNNConvFCHead(
(fc1): Linear(in_features=100352, out_features=1024, bias=True)
(fc2): Linear(in_features=1024, out_features=1024, bias=True)
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=1024, out_features=11, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=40, bias=True)
)
)
)
[04/30 13:20:57 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/30 13:20:57 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/30 13:20:57 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/30 13:20:57 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/30 13:20:57 d2.data.build]: Using training sampler TrainingSampler
model_final_68d202.pkl: 663MB [00:27, 24.0MB/s]
'roi_heads.box_predictor.cls_score.weight' has shape (81, 1024) in the checkpoint but (11, 1024) in the model! Skipped.
'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 1024) in the checkpoint but (40, 1024) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped.
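These "Skipped" warnings are expected when fine-tuning a COCO checkpoint: the pretrained box predictor was built for COCO's 80 classes, while this run uses 10, so the mismatched layers are left out and re-initialized. A quick illustration of where the 81/320 vs 11/40 shapes come from:

```python
# Illustration only: FastRCNNOutputLayers sizes its two heads from NUM_CLASSES, which is
# why the COCO-pretrained weights (80 classes) cannot be loaded into this model (10 classes).
coco_classes, reef_classes = 80, 10

def head_shapes(num_classes):
    cls_out = num_classes + 1      # +1 output for the background class
    bbox_out = num_classes * 4     # 4 box deltas per foreground class
    return cls_out, bbox_out

assert head_shapes(coco_classes) == (81, 320)   # shapes found in the checkpoint
assert head_shapes(reef_classes) == (11, 40)    # shapes expected by this model
```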
[04/30 13:21:26 d2.engine.train_loop]: Starting training from iteration 0
[04/30 13:22:00 d2.utils.events]: eta: 0:26:21 iter: 19 total_loss: 4.827 loss_cls: 2.356 loss_box_reg: 0.722 loss_rpn_cls: 1.528 loss_rpn_loc: 0.197 time: 1.6023 data_time: 0.2165 lr: 0.000020 max_mem: 8913M
[04/30 13:22:32 d2.utils.events]: eta: 0:25:46 iter: 39 total_loss: 3.383 loss_cls: 1.928 loss_box_reg: 0.735 loss_rpn_cls: 0.579 loss_rpn_loc: 0.193 time: 1.5939 data_time: 0.0127 lr: 0.000040 max_mem: 8913M
[04/30 13:23:03 d2.utils.events]: eta: 0:25:15 iter: 59 total_loss: 2.688 loss_cls: 1.186 loss_box_reg: 0.755 loss_rpn_cls: 0.503 loss_rpn_loc: 0.182 time: 1.5893 data_time: 0.0100 lr: 0.000060 max_mem: 8913M
[04/30 13:23:36 d2.utils.events]: eta: 0:24:43 iter: 79 total_loss: 2.463 loss_cls: 1.093 loss_box_reg: 0.784 loss_rpn_cls: 0.400 loss_rpn_loc: 0.182 time: 1.5945 data_time: 0.0136 lr: 0.000080 max_mem: 8913M
[04/30 13:24:07 d2.utils.events]: eta: 0:24:11 iter: 99 total_loss: 2.391 loss_cls: 1.064 loss_box_reg: 0.779 loss_rpn_cls: 0.374 loss_rpn_loc: 0.177 time: 1.5909 data_time: 0.0114 lr: 0.000100 max_mem: 8913M
[04/30 13:24:39 d2.utils.events]: eta: 0:23:39 iter: 119 total_loss: 2.407 loss_cls: 1.045 loss_box_reg: 0.808 loss_rpn_cls: 0.352 loss_rpn_loc: 0.170 time: 1.5918 data_time: 0.0109 lr: 0.000120 max_mem: 8913M
[04/30 13:25:11 d2.utils.events]: eta: 0:23:07 iter: 139 total_loss: 2.341 loss_cls: 1.008 loss_box_reg: 0.832 loss_rpn_cls: 0.304 loss_rpn_loc: 0.168 time: 1.5937 data_time: 0.0109 lr: 0.000140 max_mem: 8913M
[04/30 13:25:42 d2.utils.events]: eta: 0:22:34 iter: 159 total_loss: 2.287 loss_cls: 1.014 loss_box_reg: 0.818 loss_rpn_cls: 0.286 loss_rpn_loc: 0.179 time: 1.5893 data_time: 0.0135 lr: 0.000160 max_mem: 8913M
[04/30 13:26:14 d2.utils.events]: eta: 0:22:01 iter: 179 total_loss: 2.292 loss_cls: 0.952 loss_box_reg: 0.876 loss_rpn_cls: 0.274 loss_rpn_loc: 0.161 time: 1.5901 data_time: 0.0102 lr: 0.000180 max_mem: 8913M
[04/30 13:26:46 d2.utils.events]: eta: 0:21:29 iter: 199 total_loss: 2.232 loss_cls: 0.964 loss_box_reg: 0.846 loss_rpn_cls: 0.273 loss_rpn_loc: 0.161 time: 1.5900 data_time: 0.0118 lr: 0.000200 max_mem: 8913M
[04/30 13:27:18 d2.utils.events]: eta: 0:20:57 iter: 219 total_loss: 2.157 loss_cls: 0.908 loss_box_reg: 0.842 loss_rpn_cls: 0.234 loss_rpn_loc: 0.162 time: 1.5913 data_time: 0.0120 lr: 0.000220 max_mem: 8913M
[04/30 13:27:50 d2.utils.events]: eta: 0:20:25 iter: 239 total_loss: 2.187 loss_cls: 0.921 loss_box_reg: 0.855 loss_rpn_cls: 0.243 loss_rpn_loc: 0.161 time: 1.5902 data_time: 0.0104 lr: 0.000240 max_mem: 8913M
[04/30 13:28:22 d2.utils.events]: eta: 0:19:52 iter: 259 total_loss: 2.078 loss_cls: 0.858 loss_box_reg: 0.850 loss_rpn_cls: 0.209 loss_rpn_loc: 0.147 time: 1.5902 data_time: 0.0108 lr: 0.000260 max_mem: 8913M
[04/30 13:28:54 d2.utils.events]: eta: 0:19:20 iter: 279 total_loss: 2.123 loss_cls: 0.861 loss_box_reg: 0.871 loss_rpn_cls: 0.210 loss_rpn_loc: 0.152 time: 1.5915 data_time: 0.0120 lr: 0.000280 max_mem: 8913M
[04/30 13:29:26 d2.utils.events]: eta: 0:18:48 iter: 299 total_loss: 2.033 loss_cls: 0.827 loss_box_reg: 0.856 loss_rpn_cls: 0.198 loss_rpn_loc: 0.150 time: 1.5917 data_time: 0.0104 lr: 0.000300 max_mem: 8913M
[04/30 13:29:58 d2.utils.events]: eta: 0:18:16 iter: 319 total_loss: 1.980 loss_cls: 0.794 loss_box_reg: 0.823 loss_rpn_cls: 0.189 loss_rpn_loc: 0.160 time: 1.5927 data_time: 0.0102 lr: 0.000320 max_mem: 8913M
[04/30 13:30:29 d2.utils.events]: eta: 0:17:44 iter: 339 total_loss: 1.922 loss_cls: 0.763 loss_box_reg: 0.840 loss_rpn_cls: 0.186 loss_rpn_loc: 0.147 time: 1.5917 data_time: 0.0129 lr: 0.000340 max_mem: 8913M
[04/30 13:31:01 d2.utils.events]: eta: 0:17:11 iter: 359 total_loss: 1.863 loss_cls: 0.738 loss_box_reg: 0.804 loss_rpn_cls: 0.183 loss_rpn_loc: 0.149 time: 1.5922 data_time: 0.0106 lr: 0.000360 max_mem: 8913M
[04/30 13:31:33 d2.utils.events]: eta: 0:16:39 iter: 379 total_loss: 1.849 loss_cls: 0.750 loss_box_reg: 0.772 loss_rpn_cls: 0.180 loss_rpn_loc: 0.130 time: 1.5921 data_time: 0.0119 lr: 0.000380 max_mem: 8913M
[04/30 13:32:06 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 13:32:06 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/30 13:32:06 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_RCNN_ROW5val'. Trying to convert it to COCO format ...
[04/30 13:32:06 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_RCNN_ROW5val' to COCO format ...)
[04/30 13:32:06 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/30 13:32:06 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/30 13:32:06 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_RCNN_ROW5val_coco_format.json' ...
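The warning above appears because the val split was registered from dataset dicts without an associated COCO json, so detectron2 converts and caches one on the fly. If a COCO-format json is already available, registering it directly avoids the conversion; a sketch (the image root is a placeholder path):

```python
# Hypothetical alternative registration for the val split.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "CoralReef_RCNN_ROW5val",                              # name used in cfg.DATASETS.TEST
    {},                                                    # extra metadata
    "coco_eval/CoralReef_RCNN_ROW5val_coco_format.json",   # e.g. the cached json from the log above
    "path/to/val/images",                                  # placeholder image root
)
```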
[04/30 13:32:06 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 13:32:11 d2.evaluation.evaluator]: Inference done 11/27. 0.1799 s / img. ETA=0:00:04
[04/30 13:32:14 d2.evaluation.evaluator]: Total inference time: 0:00:04.861242 (0.220966 s / img per device, on 1 devices)
[04/30 13:32:14 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.158855 s / img per device, on 1 devices)
[04/30 13:32:14 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 13:32:14 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 13:32:14 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.33s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.055
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.142
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.036
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.028
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.063
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.019
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.077
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.126
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.063
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.137
[04/30 13:32:16 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 5.535 | 14.185 | 3.560 | nan | 2.795 | 6.273 |
[04/30 13:32:16 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 13:32:16 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 4.521 | Gorgonia | 13.760 | SeaRods | 0.396 |
| Antillo | 8.596 | Fish | 0.000 | Ssid | 0.000 |
| Orb | 0.000 | Other_Coral | 0.000 | Apalm | 17.732 |
| Galaxaura | 10.347 | | | | |
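Note on scales: the raw COCOeval printout reports values in [0, 1], while the detectron2 summary tables report the same numbers as percentages; the area=small entries are -1 because the 27 val images apparently contain no small ground-truth boxes, and detectron2 shows those as nan. A small illustration with the numbers from this evaluation:

```python
# Raw COCOeval values (0-1 scale, rounded to 3 decimals in the printout) vs. the detectron2
# table (x100); -1 (no ground truth in that area range) becomes nan in the table.
import math

raw = {"AP": 0.055, "AP50": 0.142, "AP75": 0.036, "APs": -1.0, "APm": 0.028, "APl": 0.063}
table = {k: (math.nan if v < 0 else v * 100) for k, v in raw.items()}
# -> roughly {'AP': 5.5, 'AP50': 14.2, 'AP75': 3.6, 'APs': nan, 'APm': 2.8, 'APl': 6.3}
# (small differences from 5.535 etc. come only from the rounding in the printout)
```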
[04/30 13:32:16 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW5val in csv format:
[04/30 13:32:16 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 13:32:16 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 13:32:16 d2.evaluation.testing]: copypaste: 5.5353,14.1853,3.5603,nan,2.7947,6.2732
[04/30 13:32:16 d2.utils.events]: eta: 0:16:07 iter: 399 total_loss: 1.821 loss_cls: 0.723 loss_box_reg: 0.780 loss_rpn_cls: 0.158 loss_rpn_loc: 0.150 time: 1.5927 data_time: 0.0111 lr: 0.000400 max_mem: 8913M
[04/30 13:32:46 d2.utils.events]: eta: 0:15:35 iter: 419 total_loss: 1.703 loss_cls: 0.650 loss_box_reg: 0.743 loss_rpn_cls: 0.156 loss_rpn_loc: 0.138 time: 1.5900 data_time: 0.0134 lr: 0.000420 max_mem: 8913M
[04/30 13:33:19 d2.utils.events]: eta: 0:15:03 iter: 439 total_loss: 1.651 loss_cls: 0.632 loss_box_reg: 0.734 loss_rpn_cls: 0.143 loss_rpn_loc: 0.135 time: 1.5908 data_time: 0.0145 lr: 0.000440 max_mem: 8913M
[04/30 13:33:50 d2.utils.events]: eta: 0:14:30 iter: 459 total_loss: 1.624 loss_cls: 0.634 loss_box_reg: 0.706 loss_rpn_cls: 0.150 loss_rpn_loc: 0.135 time: 1.5906 data_time: 0.0116 lr: 0.000460 max_mem: 8913M
[04/30 13:34:22 d2.utils.events]: eta: 0:13:58 iter: 479 total_loss: 1.534 loss_cls: 0.590 loss_box_reg: 0.695 loss_rpn_cls: 0.139 loss_rpn_loc: 0.120 time: 1.5910 data_time: 0.0102 lr: 0.000480 max_mem: 8913M
[04/30 13:34:54 d2.utils.events]: eta: 0:13:26 iter: 499 total_loss: 1.503 loss_cls: 0.581 loss_box_reg: 0.672 loss_rpn_cls: 0.151 loss_rpn_loc: 0.133 time: 1.5909 data_time: 0.0105 lr: 0.000500 max_mem: 8913M
[04/30 13:35:26 d2.utils.events]: eta: 0:12:54 iter: 519 total_loss: 1.411 loss_cls: 0.510 loss_box_reg: 0.636 loss_rpn_cls: 0.122 loss_rpn_loc: 0.127 time: 1.5916 data_time: 0.0110 lr: 0.000519 max_mem: 8913M
[04/30 13:35:58 d2.utils.events]: eta: 0:12:22 iter: 539 total_loss: 1.463 loss_cls: 0.535 loss_box_reg: 0.664 loss_rpn_cls: 0.128 loss_rpn_loc: 0.126 time: 1.5918 data_time: 0.0112 lr: 0.000539 max_mem: 8913M
[04/30 13:36:30 d2.utils.events]: eta: 0:11:49 iter: 559 total_loss: 1.479 loss_cls: 0.511 loss_box_reg: 0.669 loss_rpn_cls: 0.131 loss_rpn_loc: 0.142 time: 1.5921 data_time: 0.0125 lr: 0.000559 max_mem: 8913M
[04/30 13:37:02 d2.utils.events]: eta: 0:11:17 iter: 579 total_loss: 1.417 loss_cls: 0.525 loss_box_reg: 0.662 loss_rpn_cls: 0.118 loss_rpn_loc: 0.117 time: 1.5918 data_time: 0.0107 lr: 0.000579 max_mem: 8913M
[04/30 13:37:34 d2.utils.events]: eta: 0:10:45 iter: 599 total_loss: 1.352 loss_cls: 0.499 loss_box_reg: 0.632 loss_rpn_cls: 0.118 loss_rpn_loc: 0.114 time: 1.5920 data_time: 0.0100 lr: 0.000599 max_mem: 8913M
[04/30 13:38:06 d2.utils.events]: eta: 0:10:13 iter: 619 total_loss: 1.385 loss_cls: 0.500 loss_box_reg: 0.644 loss_rpn_cls: 0.102 loss_rpn_loc: 0.126 time: 1.5926 data_time: 0.0109 lr: 0.000619 max_mem: 8913M
[04/30 13:38:38 d2.utils.events]: eta: 0:09:41 iter: 639 total_loss: 1.353 loss_cls: 0.499 loss_box_reg: 0.610 loss_rpn_cls: 0.122 loss_rpn_loc: 0.124 time: 1.5928 data_time: 0.0110 lr: 0.000639 max_mem: 8913M
[04/30 13:39:10 d2.utils.events]: eta: 0:09:08 iter: 659 total_loss: 1.268 loss_cls: 0.453 loss_box_reg: 0.597 loss_rpn_cls: 0.111 loss_rpn_loc: 0.118 time: 1.5926 data_time: 0.0099 lr: 0.000659 max_mem: 8913M
[04/30 13:39:42 d2.utils.events]: eta: 0:08:36 iter: 679 total_loss: 1.313 loss_cls: 0.455 loss_box_reg: 0.604 loss_rpn_cls: 0.104 loss_rpn_loc: 0.122 time: 1.5926 data_time: 0.0095 lr: 0.000679 max_mem: 8913M
[04/30 13:40:14 d2.utils.events]: eta: 0:08:04 iter: 699 total_loss: 1.236 loss_cls: 0.461 loss_box_reg: 0.581 loss_rpn_cls: 0.096 loss_rpn_loc: 0.115 time: 1.5927 data_time: 0.0108 lr: 0.000699 max_mem: 8913M
[04/30 13:40:45 d2.utils.events]: eta: 0:07:32 iter: 719 total_loss: 1.299 loss_cls: 0.461 loss_box_reg: 0.597 loss_rpn_cls: 0.085 loss_rpn_loc: 0.122 time: 1.5926 data_time: 0.0099 lr: 0.000719 max_mem: 8913M
[04/30 13:41:17 d2.utils.events]: eta: 0:07:00 iter: 739 total_loss: 1.247 loss_cls: 0.456 loss_box_reg: 0.604 loss_rpn_cls: 0.086 loss_rpn_loc: 0.109 time: 1.5921 data_time: 0.0109 lr: 0.000739 max_mem: 8913M
[04/30 13:41:49 d2.utils.events]: eta: 0:06:27 iter: 759 total_loss: 1.218 loss_cls: 0.429 loss_box_reg: 0.581 loss_rpn_cls: 0.094 loss_rpn_loc: 0.125 time: 1.5919 data_time: 0.0119 lr: 0.000759 max_mem: 8913M
[04/30 13:42:20 d2.utils.events]: eta: 0:05:55 iter: 779 total_loss: 1.160 loss_cls: 0.415 loss_box_reg: 0.566 loss_rpn_cls: 0.077 loss_rpn_loc: 0.105 time: 1.5911 data_time: 0.0104 lr: 0.000779 max_mem: 8913M
[04/30 13:42:52 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 13:42:52 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 13:42:52 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 13:42:57 d2.evaluation.evaluator]: Inference done 11/27. 0.1636 s / img. ETA=0:00:03
[04/30 13:43:00 d2.evaluation.evaluator]: Total inference time: 0:00:04.682457 (0.212839 s / img per device, on 1 devices)
[04/30 13:43:00 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.153637 s / img per device, on 1 devices)
[04/30 13:43:00 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 13:43:00 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 13:43:00 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.11s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.140
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.325
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.110
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.109
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.158
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.067
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.200
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.259
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.191
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.279
[04/30 13:43:02 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 14.014 | 32.499 | 10.999 | nan | 10.918 | 15.757 |
[04/30 13:43:02 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 13:43:02 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 31.028 | Gorgonia | 22.844 | SeaRods | 2.572 |
| Antillo | 17.342 | Fish | 2.425 | Ssid | 2.000 |
| Orb | 14.840 | Other_Coral | 0.000 | Apalm | 32.212 |
| Galaxaura | 14.876 | | | | |
[04/30 13:43:02 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW5val in csv format:
[04/30 13:43:02 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 13:43:02 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 13:43:02 d2.evaluation.testing]: copypaste: 14.0139,32.4992,10.9994,nan,10.9177,15.7571
[04/30 13:43:02 d2.utils.events]: eta: 0:05:23 iter: 799 total_loss: 1.178 loss_cls: 0.412 loss_box_reg: 0.547 loss_rpn_cls: 0.081 loss_rpn_loc: 0.112 time: 1.5915 data_time: 0.0113 lr: 0.000799 max_mem: 8913M
[04/30 13:43:33 d2.utils.events]: eta: 0:04:51 iter: 819 total_loss: 1.163 loss_cls: 0.415 loss_box_reg: 0.540 loss_rpn_cls: 0.081 loss_rpn_loc: 0.112 time: 1.5905 data_time: 0.0112 lr: 0.000819 max_mem: 8913M
[04/30 13:44:05 d2.utils.events]: eta: 0:04:19 iter: 839 total_loss: 1.125 loss_cls: 0.398 loss_box_reg: 0.569 loss_rpn_cls: 0.063 loss_rpn_loc: 0.111 time: 1.5906 data_time: 0.0112 lr: 0.000839 max_mem: 8913M
[04/30 13:44:36 d2.utils.events]: eta: 0:03:46 iter: 859 total_loss: 1.144 loss_cls: 0.393 loss_box_reg: 0.554 loss_rpn_cls: 0.073 loss_rpn_loc: 0.110 time: 1.5898 data_time: 0.0107 lr: 0.000859 max_mem: 8913M
[04/30 13:45:08 d2.utils.events]: eta: 0:03:14 iter: 879 total_loss: 1.091 loss_cls: 0.379 loss_box_reg: 0.519 loss_rpn_cls: 0.082 loss_rpn_loc: 0.111 time: 1.5903 data_time: 0.0114 lr: 0.000879 max_mem: 8913M
[04/30 13:45:40 d2.utils.events]: eta: 0:02:42 iter: 899 total_loss: 1.071 loss_cls: 0.388 loss_box_reg: 0.533 loss_rpn_cls: 0.066 loss_rpn_loc: 0.106 time: 1.5906 data_time: 0.0109 lr: 0.000899 max_mem: 8913M
[04/30 13:46:12 d2.utils.events]: eta: 0:02:10 iter: 919 total_loss: 1.112 loss_cls: 0.394 loss_box_reg: 0.540 loss_rpn_cls: 0.066 loss_rpn_loc: 0.104 time: 1.5907 data_time: 0.0111 lr: 0.000919 max_mem: 8913M
[04/30 13:46:44 d2.utils.events]: eta: 0:01:38 iter: 939 total_loss: 1.050 loss_cls: 0.355 loss_box_reg: 0.516 loss_rpn_cls: 0.079 loss_rpn_loc: 0.105 time: 1.5905 data_time: 0.0112 lr: 0.000939 max_mem: 8913M
[04/30 13:47:16 d2.utils.events]: eta: 0:01:05 iter: 959 total_loss: 1.102 loss_cls: 0.393 loss_box_reg: 0.521 loss_rpn_cls: 0.079 loss_rpn_loc: 0.108 time: 1.5910 data_time: 0.0118 lr: 0.000959 max_mem: 8913M
[04/30 13:47:48 d2.utils.events]: eta: 0:00:33 iter: 979 total_loss: 0.997 loss_cls: 0.329 loss_box_reg: 0.490 loss_rpn_cls: 0.055 loss_rpn_loc: 0.100 time: 1.5913 data_time: 0.0123 lr: 0.000979 max_mem: 8913M
[04/30 13:48:27 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 13:48:27 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 13:48:27 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 13:48:32 d2.evaluation.evaluator]: Inference done 11/27. 0.1766 s / img. ETA=0:00:03
[04/30 13:48:35 d2.evaluation.evaluator]: Total inference time: 0:00:04.663571 (0.211980 s / img per device, on 1 devices)
[04/30 13:48:35 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.155412 s / img per device, on 1 devices)
[04/30 13:48:35 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 13:48:35 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 13:48:35 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.30s).
Accumulating evaluation results...
DONE (t=0.11s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.151
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.339
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.125
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.093
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.171
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.089
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.235
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.288
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.165
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
[04/30 13:48:37 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 15.120 | 33.891 | 12.498 | nan | 9.268 | 17.114 |
[04/30 13:48:37 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 13:48:37 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 29.079 | Gorgonia | 20.155 | SeaRods | 3.428 |
| Antillo | 17.413 | Fish | 2.993 | Ssid | 3.333 |
| Orb | 24.862 | Other_Coral | 0.363 | Apalm | 33.971 |
| Galaxaura | 15.601 | | | | |
[04/30 13:48:37 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW5val in csv format:
[04/30 13:48:37 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 13:48:37 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 13:48:37 d2.evaluation.testing]: copypaste: 15.1199,33.8912,12.4984,nan,9.2679,17.1138
[04/30 13:48:37 d2.utils.events]: eta: 0:00:01 iter: 999 total_loss: 1.025 loss_cls: 0.351 loss_box_reg: 0.511 loss_rpn_cls: 0.063 loss_rpn_loc: 0.102 time: 1.5913 data_time: 0.0096 lr: 0.000999 max_mem: 8913M
[04/30 13:48:37 d2.engine.hooks]: Overall training speed: 997 iterations in 0:26:28 (1.5930 s / it)
[04/30 13:48:37 d2.engine.hooks]: Total training time: 0:27:05 (0:00:37 on hooks)
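Since each evaluation also logs a machine-readable `copypaste:` row, the three ROW5 evaluations above (AP 5.54 → 14.01 → 15.12) can be pulled out of a saved log and compared programmatically. A small helper sketch (the log path and the helper itself are hypothetical, not part of detectron2):

```python
# Hypothetical helper: collect the numeric "copypaste" rows from a saved detectron2 log.
import re

METRICS = ["AP", "AP50", "AP75", "APs", "APm", "APl"]
NUMERIC_ROW = re.compile(r"copypaste:\s*([0-9][0-9.,na]*)\s*$")  # skips the "Task:" and header rows

def collect_copypaste(log_path="log.txt"):
    rows = []
    with open(log_path) as f:
        for line in f:
            m = NUMERIC_ROW.search(line)
            if m:
                rows.append(dict(zip(METRICS, m.group(1).split(","))))
    return rows

# For the ROW5 run this would yield roughly:
# [{'AP': '5.5353', ...}, {'AP': '14.0139', ...}, {'AP': '15.1199', ...}]
```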
FCNN_ROW6:
YAML_NAME = Faster_Row6
Faster_Row6 = "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
GeneralizedRCNN( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): StandardROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (box_head): FastRCNNConvFCHead( (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=11, bias=True) (bbox_pred): Linear(in_features=1024, out_features=40, bias=True) ) ) ) [04/27 13:18:13 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/27 13:18:13 d2.data.build]: Distribution of instances among all 10 categories:
| category | #instances | category | #instances | category | #instances |
|---|---|---|---|---|---|
| Past | 260 | Gorgonia | 730 | SeaRods | 170 |
| Antillo | 545 | Fish | 214 | Ssid | 28 |
| Orb | 89 | Other_Coral | 47 | Apalm | 191 |
| Galaxaura | 924 | | | | |
| total | 3198 | | | | |
[04/27 13:18:13 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/27 13:18:13 d2.data.common]: Serialized dataset takes 0.23 MiB
[04/27 13:18:13 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/27 13:18:13 d2.data.build]: Using training sampler TrainingSampler
'roi_heads.box_predictor.cls_score.weight' has shape (81, 1024) in the checkpoint but (11, 1024) in the model! Skipped.
'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 1024) in the checkpoint but (40, 1024) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped.
[04/27 13:18:15 d2.engine.train_loop]: Starting training from iteration 0
[04/27 13:18:55 d2.utils.events]: eta: 0:08:59 iter: 19 total_loss: 4.984 loss_cls: 2.565 loss_box_reg: 0.613 loss_rpn_cls: 1.593 loss_rpn_loc: 0.206 time: 1.9293 data_time: 0.9858 lr: 0.000020 max_mem: 9064M
[04/27 13:19:33 d2.utils.events]: eta: 0:08:16 iter: 39 total_loss: 3.417 loss_cls: 2.136 loss_box_reg: 0.611 loss_rpn_cls: 0.443 loss_rpn_loc: 0.205 time: 1.9075 data_time: 0.8638 lr: 0.000040 max_mem: 9064M
[04/27 13:20:10 d2.utils.events]: eta: 0:07:38 iter: 59 total_loss: 2.476 loss_cls: 1.352 loss_box_reg: 0.617 loss_rpn_cls: 0.334 loss_rpn_loc: 0.198 time: 1.8974 data_time: 0.8826 lr: 0.000060 max_mem: 9064M
[04/27 13:20:48 d2.utils.events]: eta: 0:06:54 iter: 79 total_loss: 2.259 loss_cls: 1.081 loss_box_reg: 0.663 loss_rpn_cls: 0.313 loss_rpn_loc: 0.202 time: 1.8893 data_time: 0.8604 lr: 0.000080 max_mem: 9064M
[04/27 13:21:25 d2.utils.events]: eta: 0:06:16 iter: 99 total_loss: 2.179 loss_cls: 1.054 loss_box_reg: 0.663 loss_rpn_cls: 0.289 loss_rpn_loc: 0.190 time: 1.8872 data_time: 0.8619 lr: 0.000100 max_mem: 9064M
[04/27 13:22:02 d2.utils.events]: eta: 0:05:38 iter: 119 total_loss: 2.192 loss_cls: 1.030 loss_box_reg: 0.708 loss_rpn_cls: 0.292 loss_rpn_loc: 0.192 time: 1.8817 data_time: 0.8489 lr: 0.000120 max_mem: 9064M
[04/27 13:22:39 d2.utils.events]: eta: 0:05:02 iter: 139 total_loss: 2.197 loss_cls: 1.017 loss_box_reg: 0.715 loss_rpn_cls: 0.266 loss_rpn_loc: 0.192 time: 1.8773 data_time: 0.8677 lr: 0.000140 max_mem: 9064M
[04/27 13:23:16 d2.utils.events]: eta: 0:04:22 iter: 159 total_loss: 2.187 loss_cls: 1.018 loss_box_reg: 0.734 loss_rpn_cls: 0.260 loss_rpn_loc: 0.179 time: 1.8688 data_time: 0.8055 lr: 0.000160 max_mem: 9064M
[04/27 13:23:53 d2.utils.events]: eta: 0:03:45 iter: 179 total_loss: 2.133 loss_cls: 0.985 loss_box_reg: 0.751 loss_rpn_cls: 0.244 loss_rpn_loc: 0.192 time: 1.8665 data_time: 0.8722 lr: 0.000180 max_mem: 9064M
[04/27 13:24:29 d2.data.build]: Distribution of instances among all 10 categories:
| category | #instances | category | #instances | category | #instances |
|---|---|---|---|---|---|
| Past | 66 | Gorgonia | 150 | SeaRods | 51 |
| Antillo | 126 | Fish | 56 | Ssid | 2 |
| Orb | 29 | Other_Coral | 11 | Apalm | 83 |
| Galaxaura | 227 | | | | |
| total | 801 | | | | |
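Putting the train and val distribution tables side by side helps explain the per-category results: the rarest classes (Ssid with 28 train / 2 val instances, Other_Coral with 47 / 11) are the ones that stay at or near 0 AP across the evaluations above, while frequent classes such as Gorgonia and Galaxaura are learned first. A small snippet with the counts copied from the two tables:

```python
# Train / val instance counts copied from the two distribution tables above (ROW6 run).
train_counts = {"Past": 260, "Gorgonia": 730, "SeaRods": 170, "Antillo": 545, "Fish": 214,
                "Ssid": 28, "Orb": 89, "Other_Coral": 47, "Apalm": 191, "Galaxaura": 924}
val_counts   = {"Past": 66, "Gorgonia": 150, "SeaRods": 51, "Antillo": 126, "Fish": 56,
                "Ssid": 2, "Orb": 29, "Other_Coral": 11, "Apalm": 83, "Galaxaura": 227}

# Rank classes by how little training data they have.
for cls in sorted(train_counts, key=train_counts.get):
    print(f"{cls:12s} train={train_counts[cls]:4d} val={val_counts[cls]:3d}")
```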
[04/27 13:24:29 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 13:24:29 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/27 13:24:29 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_newval'. Trying to convert it to COCO format ...
[04/27 13:24:29 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_newval' to COCO format ...)
[04/27 13:24:30 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/27 13:24:30 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 801
[04/27 13:24:30 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_newval_coco_format.json' ...
[04/27 13:24:30 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 13:24:33 d2.evaluation.evaluator]: Inference done 11/27. 0.1223 s / img. ETA=0:00:03
[04/27 13:24:37 d2.evaluation.evaluator]: Total inference time: 0:00:04.562651 (0.207393 s / img per device, on 1 devices)
[04/27 13:24:37 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.116224 s / img per device, on 1 devices)
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.58s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.028
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.021
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.028
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 0.407 | 1.103 | 0.129 | nan | 0.028 | 0.498 |
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 13:24:37 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 0.000 | Gorgonia | 3.827 | SeaRods | 0.000 |
| Antillo | 0.009 | Fish | 0.000 | Ssid | 0.000 |
| Orb | 0.000 | Other_Coral | 0.000 | Apalm | 0.000 |
| Galaxaura | 0.238 | | | | |
[04/27 13:24:37 d2.engine.defaults]: Evaluation results for CoralReef_newval in csv format:
[04/27 13:24:37 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 13:24:37 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 13:24:37 d2.evaluation.testing]: copypaste: 0.4074,1.1030,0.1289,nan,0.0283,0.4977
[04/27 13:24:37 d2.utils.events]: eta: 0:03:07 iter: 199 total_loss: 2.208 loss_cls: 0.993 loss_box_reg: 0.774 loss_rpn_cls: 0.232 loss_rpn_loc: 0.180 time: 1.8612 data_time: 0.8493 lr: 0.000200 max_mem: 9064M
[04/27 13:25:13 d2.utils.events]: eta: 0:02:30 iter: 219 total_loss: 2.177 loss_cls: 0.981 loss_box_reg: 0.793 loss_rpn_cls: 0.219 loss_rpn_loc: 0.181 time: 1.8552 data_time: 0.8070 lr: 0.000220 max_mem: 9064M
[04/27 13:25:50 d2.utils.events]: eta: 0:01:52 iter: 239 total_loss: 2.144 loss_cls: 0.973 loss_box_reg: 0.779 loss_rpn_cls: 0.221 loss_rpn_loc: 0.177 time: 1.8537 data_time: 0.9013 lr: 0.000240 max_mem: 9064M
[04/27 13:26:26 d2.utils.events]: eta: 0:01:15 iter: 259 total_loss: 2.127 loss_cls: 0.956 loss_box_reg: 0.788 loss_rpn_cls: 0.209 loss_rpn_loc: 0.179 time: 1.8480 data_time: 0.8240 lr: 0.000260 max_mem: 9064M
[04/27 13:27:02 d2.utils.events]: eta: 0:00:38 iter: 279 total_loss: 2.151 loss_cls: 0.946 loss_box_reg: 0.785 loss_rpn_cls: 0.199 loss_rpn_loc: 0.166 time: 1.8442 data_time: 0.8222 lr: 0.000280 max_mem: 9064M
[04/27 13:27:39 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 13:27:39 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/27 13:27:39 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 13:27:43 d2.evaluation.evaluator]: Inference done 11/27. 0.1269 s / img. ETA=0:00:03
[04/27 13:27:46 d2.evaluation.evaluator]: Total inference time: 0:00:04.463164 (0.202871 s / img per device, on 1 devices)
[04/27 13:27:46 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.113471 s / img per device, on 1 devices)
[04/27 13:27:46 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 13:27:46 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 13:27:46 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.72s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.029
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.006
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.022
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.051
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.041
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.052
[04/27 13:27:47 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 0.991 | 2.888 | 0.351 | nan | 0.383 | 1.067 |
[04/27 13:27:47 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 13:27:47 d2.evaluation.coco_evaluation]: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 0.000 | Gorgonia | 6.011 | SeaRods | 0.000 |
| Antillo | 0.705 | Fish | 0.000 | Ssid | 0.000 |
| Orb | 0.000 | Other_Coral | 0.000 | Apalm | 0.673 |
| Galaxaura | 2.516 | | | | |
[04/27 13:27:47 d2.engine.defaults]: Evaluation results for CoralReef_newval in csv format:
[04/27 13:27:47 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 13:27:47 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 13:27:47 d2.evaluation.testing]: copypaste: 0.9906,2.8880,0.3513,nan,0.3829,1.0672
[04/27 13:27:47 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 2.086 loss_cls: 0.909 loss_box_reg: 0.792 loss_rpn_cls: 0.195 loss_rpn_loc: 0.170 time: 1.8406 data_time: 0.8088 lr: 0.000300 max_mem: 9064M
[04/27 13:27:47 d2.engine.hooks]: Overall training speed: 297 iterations in 0:09:08 (1.8468 s / it)
[04/27 13:27:47 d2.engine.hooks]: Total training time: 0:09:26 (0:00:18 on hooks)
FCNN_ROW7:
YAML_FILE = Faster_Row7
Faster_Row7 = "COCO-Detection/faster_rcnn_R_101_C4_3x.yaml"
GeneralizedRCNN( (backbone): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): 
Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (6): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (7): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, 
eps=1e-05) ) ) (8): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (9): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (10): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (11): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (12): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (13): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (14): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (15): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (16): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 
1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (17): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (18): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (19): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (20): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (21): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (22): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): Res5ROIHeads( (pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): 
Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=2048, out_features=11, bias=True) (bbox_pred): Linear(in_features=2048, out_features=40, bias=True) ) ) ) [04/27 13:53:47 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/27 13:53:47 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 283        | Gorgonia    | 701        | SeaRods  | 185        |
| Antillo   | 544        | Fish        | 211        | Ssid     | 29         |
| Orb       | 92         | Other_Coral | 48         | Apalm    | 218        |
| Galaxaura | 804        |             |            |          |            |
| total     | 3115       |             |            |          |            |
[04/27 13:53:47 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/27 13:53:47 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/27 13:53:47 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/27 13:53:47 d2.data.build]: Using training sampler TrainingSampler
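The two TransformGens in the line above are detectron2's default training-time augmentation. A minimal sketch of the input settings that would produce exactly that pipeline (assuming a standard `get_cfg()`-based training script, which is not included in this issue):

```python
from detectron2.config import get_cfg

cfg = get_cfg()
# Multi-scale training: for each image one shortest-edge size is sampled from this
# tuple ("choice"), and the longest edge is capped at 1333 px. This matches the
# ResizeShortestEdge(...) entry logged above.
cfg.INPUT.MIN_SIZE_TRAIN = (640, 672, 704, 736, 768, 800)
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
cfg.INPUT.MAX_SIZE_TRAIN = 1333
# RandomFlip() is added automatically by the default DatasetMapper.
```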
model_final_298dad.pkl: 212MB [00:19, 10.7MB/s]
'roi_heads.box_predictor.cls_score.weight' has shape (81, 2048) in the checkpoint but (11, 2048) in the model! Skipped.
'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 2048) in the checkpoint but (40, 2048) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped.
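The four "Skipped" messages are expected when fine-tuning a COCO checkpoint on this 10-class dataset: 81 = 80 classes + background and 320 = 4 × 80, while the new heads need 11 and 40 outputs, so those layers are re-initialized. A hedged sketch of a config that would produce this (the checkpoint name `model_final_298dad.pkl` suggests `faster_rcnn_R_101_C4_3x.yaml`, but the actual script is not shown here):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_101_C4_3x.yaml"))
# Start from the COCO-pretrained weights; the 81-way cls_score and 320-way bbox_pred
# tensors cannot be loaded into the smaller heads below, hence the "Skipped" warnings.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_101_C4_3x.yaml")
# 10 foreground classes -> cls_score: 10 + 1 (background) = 11, bbox_pred: 4 * 10 = 40,
# matching the FastRCNNOutputLayers shapes in the model printout above.
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 10
```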
[04/27 13:54:09 d2.engine.train_loop]: Starting training from iteration 0
[04/27 13:54:50 d2.utils.events]: eta: 0:09:06 iter: 19 total_loss: 4.569 loss_cls: 2.340 loss_box_reg: 0.752 loss_rpn_cls: 1.311 loss_rpn_loc: 0.209 time: 1.9715 data_time: 0.6521 lr: 0.000020 max_mem: 8842M
[04/27 13:55:28 d2.utils.events]: eta: 0:08:22 iter: 39 total_loss: 3.479 loss_cls: 1.927 loss_box_reg: 0.788 loss_rpn_cls: 0.592 loss_rpn_loc: 0.185 time: 1.9353 data_time: 0.5351 lr: 0.000040 max_mem: 8842M
[04/27 13:56:06 d2.utils.events]: eta: 0:07:40 iter: 59 total_loss: 2.677 loss_cls: 1.223 loss_box_reg: 0.775 loss_rpn_cls: 0.459 loss_rpn_loc: 0.185 time: 1.9249 data_time: 0.5414 lr: 0.000060 max_mem: 8842M
[04/27 13:56:45 d2.utils.events]: eta: 0:07:02 iter: 79 total_loss: 2.414 loss_cls: 1.092 loss_box_reg: 0.782 loss_rpn_cls: 0.388 loss_rpn_loc: 0.170 time: 1.9219 data_time: 0.5089 lr: 0.000080 max_mem: 8842M
[04/27 13:57:22 d2.utils.events]: eta: 0:06:23 iter: 99 total_loss: 2.371 loss_cls: 1.042 loss_box_reg: 0.804 loss_rpn_cls: 0.347 loss_rpn_loc: 0.170 time: 1.9154 data_time: 0.4971 lr: 0.000100 max_mem: 8842M
[04/27 13:58:01 d2.utils.events]: eta: 0:05:45 iter: 119 total_loss: 2.322 loss_cls: 1.008 loss_box_reg: 0.837 loss_rpn_cls: 0.314 loss_rpn_loc: 0.160 time: 1.9151 data_time: 0.5051 lr: 0.000120 max_mem: 8842M
[04/27 13:58:39 d2.utils.events]: eta: 0:05:06 iter: 139 total_loss: 2.267 loss_cls: 0.973 loss_box_reg: 0.834 loss_rpn_cls: 0.296 loss_rpn_loc: 0.160 time: 1.9119 data_time: 0.4737 lr: 0.000140 max_mem: 8842M
[04/27 13:59:16 d2.utils.events]: eta: 0:04:29 iter: 159 total_loss: 2.215 loss_cls: 0.950 loss_box_reg: 0.840 loss_rpn_cls: 0.265 loss_rpn_loc: 0.159 time: 1.9096 data_time: 0.4965 lr: 0.000160 max_mem: 8842M
[04/27 13:59:54 d2.utils.events]: eta: 0:03:50 iter: 179 total_loss: 2.190 loss_cls: 0.924 loss_box_reg: 0.828 loss_rpn_cls: 0.258 loss_rpn_loc: 0.160 time: 1.9084 data_time: 0.4717 lr: 0.000180 max_mem: 8842M
[04/27 14:00:33 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 43         | Gorgonia    | 179        | SeaRods  | 36         |
| Antillo   | 127        | Fish        | 59         | Ssid     | 1          |
| Orb       | 26         | Other_Coral | 10         | Apalm    | 56         |
| Galaxaura | 347        |             |            |          |            |
| total     | 884        |             |            |          |            |
[04/27 14:00:33 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 14:00:33 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/27 14:00:33 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_newf7val'. Trying to convert it to COCO format ...
[04/27 14:00:33 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_newf7val' to COCO format ...)
[04/27 14:00:33 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/27 14:00:33 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/27 14:00:33 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_newf7val_coco_format.json' ...
[04/27 14:00:33 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 14:00:37 d2.evaluation.evaluator]: Inference done 11/27. 0.2880 s / img. ETA=0:00:04
[04/27 14:00:42 d2.evaluation.evaluator]: Total inference time: 0:00:06.580198 (0.299100 s / img per device, on 1 devices)
[04/27 14:00:42 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:06 (0.277193 s / img per device, on 1 devices)
[04/27 14:00:42 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 14:00:42 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 14:00:42 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.14s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.85s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.034
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.025
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.050
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.013
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.054
[04/27 14:00:43 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50  | AP75  | APs | APm   | APl   |
|-------|-------|-------|-----|-------|-------|
| 1.083 | 3.367 | 0.504 | nan | 0.143 | 1.168 |
[04/27 14:00:43 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 14:00:43 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|-----------|-------|-------------|-------|----------|-------|
| Past      | 0.000 | Gorgonia    | 4.319 | SeaRods  | 0.000 |
| Antillo   | 1.794 | Fish        | 0.000 | Ssid     | 0.000 |
| Orb       | 0.000 | Other_Coral | 0.000 | Apalm    | 0.000 |
| Galaxaura | 4.718 |             |       |          |       |
[04/27 14:00:43 d2.engine.defaults]: Evaluation results for CoralReef_newf7val in csv format:
[04/27 14:00:43 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 14:00:43 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 14:00:43 d2.evaluation.testing]: copypaste: 1.0831,3.3672,0.5044,nan,0.1428,1.1682
[04/27 14:00:43 d2.utils.events]: eta: 0:03:12 iter: 199 total_loss: 2.121 loss_cls: 0.891 loss_box_reg: 0.838 loss_rpn_cls: 0.237 loss_rpn_loc: 0.158 time: 1.9071 data_time: 0.4950 lr: 0.000200 max_mem: 8842M
[04/27 14:01:21 d2.utils.events]: eta: 0:02:33 iter: 219 total_loss: 2.050 loss_cls: 0.846 loss_box_reg: 0.825 loss_rpn_cls: 0.220 loss_rpn_loc: 0.152 time: 1.9059 data_time: 0.4618 lr: 0.000220 max_mem: 8842M
[04/27 14:01:59 d2.utils.events]: eta: 0:01:55 iter: 239 total_loss: 2.030 loss_cls: 0.825 loss_box_reg: 0.835 loss_rpn_cls: 0.211 loss_rpn_loc: 0.147 time: 1.9066 data_time: 0.4669 lr: 0.000240 max_mem: 8842M
[04/27 14:02:37 d2.utils.events]: eta: 0:01:17 iter: 259 total_loss: 1.974 loss_cls: 0.776 loss_box_reg: 0.849 loss_rpn_cls: 0.200 loss_rpn_loc: 0.160 time: 1.9054 data_time: 0.4355 lr: 0.000260 max_mem: 8842M
[04/27 14:03:16 d2.utils.events]: eta: 0:00:39 iter: 279 total_loss: 1.944 loss_cls: 0.758 loss_box_reg: 0.854 loss_rpn_cls: 0.193 loss_rpn_loc: 0.149 time: 1.9058 data_time: 0.4723 lr: 0.000280 max_mem: 8842M
[04/27 14:03:55 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 14:03:55 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/27 14:03:55 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 14:04:00 d2.evaluation.evaluator]: Inference done 11/27. 0.2764 s / img. ETA=0:00:04
[04/27 14:04:05 d2.evaluation.evaluator]: Total inference time: 0:00:06.252762 (0.284216 s / img per device, on 1 devices)
[04/27 14:04:05 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:06 (0.272936 s / img per device, on 1 devices)
[04/27 14:04:05 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 14:04:05 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 14:04:05 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.19s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.134
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.211
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.120
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.140
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.107
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.158
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.211
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.051
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.223
[04/27 14:04:06 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP     | AP50   | AP75   | APs | APm   | APl    |
|--------|--------|--------|-----|-------|--------|
| 13.370 | 21.112 | 12.006 | nan | 1.454 | 13.977 |
[04/27 14:04:06 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 14:04:06 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP     | category | AP     |
|-----------|-------|-------------|--------|----------|--------|
| Past      | 2.574 | Gorgonia    | 12.122 | SeaRods  | 0.000  |
| Antillo   | 5.882 | Fish        | 0.000  | Ssid     | 90.000 |
| Orb       | 3.366 | Other_Coral | 0.000  | Apalm    | 10.855 |
| Galaxaura | 8.898 |             |        |          |        |
[04/27 14:04:06 d2.engine.defaults]: Evaluation results for CoralReef_newf7val in csv format:
[04/27 14:04:06 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 14:04:06 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 14:04:06 d2.evaluation.testing]: copypaste: 13.3698,21.1121,12.0062,nan,1.4536,13.9767
[04/27 14:04:06 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 1.890 loss_cls: 0.709 loss_box_reg: 0.828 loss_rpn_cls: 0.194 loss_rpn_loc: 0.155 time: 1.9036 data_time: 0.4647 lr: 0.000300 max_mem: 8842M
[04/27 14:04:06 d2.engine.hooks]: Overall training speed: 297 iterations in 0:09:27 (1.9100 s / it)
[04/27 14:04:06 d2.engine.hooks]: Total training time: 0:09:51 (0:00:24 on hooks)
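For reference, the evaluation blocks interleaved with the training log above (one part-way through, one at iteration 299) are what detectron2's `DefaultTrainer` emits when `TEST.EVAL_PERIOD` is non-zero and `build_evaluator` returns a `COCOEvaluator`. A minimal sketch, assuming a `DefaultTrainer`-based script (not included in the issue):

```python
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class Trainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name):
        # Writes coco_eval/coco_instances_results.json and, when the dataset was not
        # registered from a COCO json, converts it first (the warnings in the log).
        return COCOEvaluator(dataset_name, cfg, False, output_dir="coco_eval")

# cfg.TEST.EVAL_PERIOD is evidently non-zero here; its exact value is not shown in the log.
```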
FCNN_ROW8:
YAML_FILE = ""
FCNN_ROW9:
YAML_FILE = ""
FCNN_ROW10:
YAML_FILE = ""
Fast:
YAML_FILE = "COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml"
Retinanet_Row1:
YAML_FILE = RetinaNet_Row1
RetinaNet_ROW1 = "COCO-Detection/retinanet_R_50_FPN_1x.yaml"
RetinaNet( (backbone): FPN( (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelP6P7( (p6): Conv2d(2048, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (p7): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (head): RetinaNetHead( (cls_subnet): Sequential( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU() (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): ReLU() (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU() ) (bbox_subnet): Sequential( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU() (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): ReLU() (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU() ) (cls_score): Conv2d(256, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (bbox_pred): Conv2d(256, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) ) [04/27 14:34:17 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/27 14:34:17 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 283        | Gorgonia    | 701        | SeaRods  | 185        |
| Antillo   | 544        | Fish        | 211        | Ssid     | 29         |
| Orb       | 92         | Other_Coral | 48         | Apalm    | 218        |
| Galaxaura | 804        |             |            |          |            |
| total     | 3115       |             |            |          |            |
[04/27 14:34:17 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/27 14:34:17 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/27 14:34:17 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/27 14:34:17 d2.data.build]: Using training sampler TrainingSampler
model_final_b796dc.pkl: 152MB [00:14, 10.4MB/s]
[04/27 14:34:33 d2.engine.train_loop]: Starting training from iteration 0
[04/27 14:35:13 d2.utils.events]: eta: 0:08:28 iter: 19 total_loss: 2.877 loss_cls: 2.371 loss_box_reg: 0.531 time: 1.8388 data_time: 0.9325 lr: 0.000020 max_mem: 13214M
[04/27 14:35:49 d2.utils.events]: eta: 0:07:45 iter: 39 total_loss: 1.508 loss_cls: 1.191 loss_box_reg: 0.351 time: 1.8081 data_time: 0.9411 lr: 0.000040 max_mem: 13214M
[04/27 14:36:25 d2.utils.events]: eta: 0:07:09 iter: 59 total_loss: 1.257 loss_cls: 0.940 loss_box_reg: 0.321 time: 1.8022 data_time: 0.9545 lr: 0.000060 max_mem: 13214M
[04/27 14:37:00 d2.utils.events]: eta: 0:06:33 iter: 79 total_loss: 1.170 loss_cls: 0.831 loss_box_reg: 0.329 time: 1.7935 data_time: 0.8906 lr: 0.000080 max_mem: 13214M
[04/27 14:37:36 d2.utils.events]: eta: 0:05:58 iter: 99 total_loss: 1.052 loss_cls: 0.754 loss_box_reg: 0.296 time: 1.7925 data_time: 0.9542 lr: 0.000100 max_mem: 13214M
[04/27 14:38:11 d2.utils.events]: eta: 0:05:21 iter: 119 total_loss: 1.022 loss_cls: 0.709 loss_box_reg: 0.300 time: 1.7883 data_time: 0.9244 lr: 0.000120 max_mem: 13214M
[04/27 14:38:47 d2.utils.events]: eta: 0:04:46 iter: 139 total_loss: 0.970 loss_cls: 0.681 loss_box_reg: 0.299 time: 1.7866 data_time: 0.9369 lr: 0.000140 max_mem: 13214M
[04/27 14:39:22 d2.utils.events]: eta: 0:04:10 iter: 159 total_loss: 0.933 loss_cls: 0.645 loss_box_reg: 0.293 time: 1.7840 data_time: 0.9264 lr: 0.000160 max_mem: 13214M
[04/27 14:39:57 d2.utils.events]: eta: 0:03:34 iter: 179 total_loss: 0.896 loss_cls: 0.614 loss_box_reg: 0.287 time: 1.7807 data_time: 0.9199 lr: 0.000180 max_mem: 13214M
[04/27 14:40:33 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 43         | Gorgonia    | 179        | SeaRods  | 36         |
| Antillo   | 127        | Fish        | 59         | Ssid     | 1          |
| Orb       | 26         | Other_Coral | 10         | Apalm    | 56         |
| Galaxaura | 347        |             |            |          |            |
| total     | 884        |             |            |          |            |
[04/27 14:40:33 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 14:40:33 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/27 14:40:33 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_retinrow1val'. Trying to convert it to COCO format ...
[04/27 14:40:33 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_retinrow1val' to COCO format ...)
[04/27 14:40:33 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/27 14:40:34 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/27 14:40:34 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_retinrow1val_coco_format.json' ...
[04/27 14:40:34 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 14:40:37 d2.evaluation.evaluator]: Inference done 11/27. 0.1391 s / img. ETA=0:00:03
[04/27 14:40:41 d2.evaluation.evaluator]: Total inference time: 0:00:04.552923 (0.206951 s / img per device, on 1 devices)
[04/27 14:40:41 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.132726 s / img per device, on 1 devices)
[04/27 14:40:41 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 14:40:41 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 14:40:41 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.95s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.021
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.050
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.023
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.024
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.059
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.087
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.039
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.101
[04/27 14:40:42 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50  | AP75  | APs | APm   | APl   |
|-------|-------|-------|-----|-------|-------|
| 2.082 | 5.043 | 1.079 | nan | 1.146 | 2.300 |
[04/27 14:40:42 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 14:40:42 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|-----------|-------|-------------|-------|----------|-------|
| Past      | 0.124 | Gorgonia    | 4.463 | SeaRods  | 0.000 |
| Antillo   | 3.445 | Fish        | 2.619 | Ssid     | 0.000 |
| Orb       | 4.705 | Other_Coral | 0.000 | Apalm    | 1.586 |
| Galaxaura | 3.877 |             |       |          |       |
[04/27 14:40:42 d2.engine.defaults]: Evaluation results for CoralReef_retinrow1val in csv format:
[04/27 14:40:42 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 14:40:42 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 14:40:42 d2.evaluation.testing]: copypaste: 2.0819,5.0433,1.0790,nan,1.1461,2.3001
[04/27 14:40:42 d2.utils.events]: eta: 0:02:59 iter: 199 total_loss: 0.917 loss_cls: 0.613 loss_box_reg: 0.292 time: 1.7796 data_time: 0.9245 lr: 0.000200 max_mem: 13214M
[04/27 14:41:16 d2.utils.events]: eta: 0:02:23 iter: 219 total_loss: 0.857 loss_cls: 0.577 loss_box_reg: 0.289 time: 1.7743 data_time: 0.8839 lr: 0.000220 max_mem: 13214M
[04/27 14:41:52 d2.utils.events]: eta: 0:01:48 iter: 239 total_loss: 0.842 loss_cls: 0.562 loss_box_reg: 0.284 time: 1.7741 data_time: 0.9382 lr: 0.000240 max_mem: 13214M
[04/27 14:42:26 d2.utils.events]: eta: 0:01:12 iter: 259 total_loss: 0.810 loss_cls: 0.528 loss_box_reg: 0.266 time: 1.7718 data_time: 0.9064 lr: 0.000260 max_mem: 13214M
[04/27 14:43:01 d2.utils.events]: eta: 0:00:37 iter: 279 total_loss: 0.799 loss_cls: 0.523 loss_box_reg: 0.270 time: 1.7698 data_time: 0.9046 lr: 0.000280 max_mem: 13214M
[04/27 14:43:39 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 14:43:39 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/27 14:43:39 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 14:43:43 d2.evaluation.evaluator]: Inference done 11/27. 0.1464 s / img. ETA=0:00:03
[04/27 14:43:46 d2.evaluation.evaluator]: Total inference time: 0:00:04.729000 (0.214955 s / img per device, on 1 devices)
[04/27 14:43:46 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.135607 s / img per device, on 1 devices)
[04/27 14:43:46 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/27 14:43:46 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/27 14:43:46 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.99s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.045
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.105
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.029
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.022
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.029
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.094
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.136
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.077
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.155
[04/27 14:43:47 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50   | AP75  | APs | APm   | APl   |
|-------|--------|-------|-----|-------|-------|
| 4.465 | 10.487 | 2.911 | nan | 2.195 | 4.879 |
[04/27 14:43:47 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/27 14:43:47 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP     | category | AP    |
|-----------|-------|-------------|--------|----------|-------|
| Past      | 0.678 | Gorgonia    | 10.452 | SeaRods  | 0.000 |
| Antillo   | 6.587 | Fish        | 2.738  | Ssid     | 0.000 |
| Orb       | 8.363 | Other_Coral | 0.000  | Apalm    | 7.820 |
| Galaxaura | 8.014 |             |        |          |       |
[04/27 14:43:47 d2.engine.defaults]: Evaluation results for CoralReef_retinrow1val in csv format:
[04/27 14:43:47 d2.evaluation.testing]: copypaste: Task: bbox
[04/27 14:43:47 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/27 14:43:47 d2.evaluation.testing]: copypaste: 4.4653,10.4871,2.9112,nan,2.1950,4.8795
[04/27 14:43:47 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 0.831 loss_cls: 0.547 loss_box_reg: 0.281 time: 1.7710 data_time: 0.9437 lr: 0.000300 max_mem: 13214M
[04/27 14:43:47 d2.engine.hooks]: Overall training speed: 297 iterations in 0:08:47 (1.7770 s / it)
[04/27 14:43:47 d2.engine.hooks]: Total training time: 0:09:07 (0:00:19 on hooks)
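One thing worth double-checking for the RetinaNet rows: in the model printout above, `cls_score` is `Conv2d(256, 720, ...)`, i.e. 9 anchors × 80 classes, which is the COCO default rather than 9 × 10 = 90 for this dataset. If that is unintended, RetinaNet's class count is controlled by a different key than the R-CNN heads; a hedged sketch (the training script is not shown in this issue):

```python
# Assumption: the same get_cfg()-based setup as the other rows in this issue.
cfg.MODEL.RETINANET.NUM_CLASSES = 10   # RetinaNet head: 9 anchors * 10 classes = 90 channels
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 10   # only affects R-CNN style ROI heads, not RetinaNet
```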
RPN_Row1:
YAML_FILE = RPN_Row1
RPN_Row1 = "COCO-Detection/rpn_R_50_FPN_1x.yaml"
ProposalNetwork( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) ) ) [04/27 16:12:08 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/27 16:12:08 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 283        | Gorgonia    | 701        | SeaRods  | 185        |
| Antillo   | 544        | Fish        | 211        | Ssid     | 29         |
| Orb       | 92         | Other_Coral | 48         | Apalm    | 218        |
| Galaxaura | 804        |             |            |          |            |
| total     | 3115       |             |            |          |            |
[04/27 16:12:08 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/27 16:12:08 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/27 16:12:08 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/27 16:12:08 d2.data.build]: Using training sampler TrainingSampler
model_final_02ce48.pkl: 110MB [00:11, 9.33MB/s]
[04/27 16:12:22 d2.engine.train_loop]: Starting training from iteration 0
[04/27 16:13:01 d2.utils.events]: eta: 0:08:41 iter: 19 total_loss: 1.342 loss_rpn_cls: 1.154 loss_rpn_loc: 0.212 time: 1.8831 data_time: 1.0711 lr: 0.000020 max_mem: 8398M
[04/27 16:13:38 d2.utils.events]: eta: 0:08:04 iter: 39 total_loss: 0.623 loss_rpn_cls: 0.420 loss_rpn_loc: 0.199 time: 1.8721 data_time: 0.9669 lr: 0.000040 max_mem: 8398M
[04/27 16:14:14 d2.utils.events]: eta: 0:07:25 iter: 59 total_loss: 0.509 loss_rpn_cls: 0.334 loss_rpn_loc: 0.187 time: 1.8436 data_time: 0.8871 lr: 0.000060 max_mem: 8398M
[04/27 16:14:50 d2.utils.events]: eta: 0:06:48 iter: 79 total_loss: 0.484 loss_rpn_cls: 0.296 loss_rpn_loc: 0.190 time: 1.8331 data_time: 0.9280 lr: 0.000080 max_mem: 8398M
[04/27 16:15:26 d2.utils.events]: eta: 0:06:09 iter: 99 total_loss: 0.472 loss_rpn_cls: 0.287 loss_rpn_loc: 0.182 time: 1.8305 data_time: 0.9485 lr: 0.000100 max_mem: 8398M
[04/27 16:16:03 d2.utils.events]: eta: 0:05:33 iter: 119 total_loss: 0.466 loss_rpn_cls: 0.281 loss_rpn_loc: 0.183 time: 1.8284 data_time: 0.9271 lr: 0.000120 max_mem: 8398M
[04/27 16:16:39 d2.utils.events]: eta: 0:04:55 iter: 139 total_loss: 0.463 loss_rpn_cls: 0.272 loss_rpn_loc: 0.186 time: 1.8252 data_time: 0.9251 lr: 0.000140 max_mem: 8398M
[04/27 16:17:15 d2.utils.events]: eta: 0:04:18 iter: 159 total_loss: 0.434 loss_rpn_cls: 0.265 loss_rpn_loc: 0.175 time: 1.8207 data_time: 0.9166 lr: 0.000160 max_mem: 8398M
[04/27 16:17:50 d2.utils.events]: eta: 0:03:39 iter: 179 total_loss: 0.429 loss_rpn_cls: 0.255 loss_rpn_loc: 0.181 time: 1.8164 data_time: 0.9230 lr: 0.000180 max_mem: 8398M
[04/27 16:18:26 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 43         | Gorgonia    | 179        | SeaRods  | 36         |
| Antillo   | 127        | Fish        | 59         | Ssid     | 1          |
| Orb       | 26         | Other_Coral | 10         | Apalm    | 56         |
| Galaxaura | 347        |             |            |          |            |
| total     | 884        |             |            |          |            |
[04/27 16:18:26 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 16:18:26 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/27 16:18:26 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_retinrow1val'. Trying to convert it to COCO format ...
WARNING [04/27 16:18:26 d2.data.datasets.coco]: Using previously cached COCO format annotations at 'coco_eval/CoralReef_retinrow1val_coco_format.json'. You need to clear the cache file if your dataset has been modified.
[04/27 16:18:26 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 16:18:30 d2.evaluation.evaluator]: Inference done 11/27. 0.0949 s / img. ETA=0:00:03
[04/27 16:18:33 d2.evaluation.evaluator]: Total inference time: 0:00:04.423983 (0.201090 s / img per device, on 1 devices)
[04/27 16:18:33 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:01 (0.085627 s / img per device, on 1 devices)
[04/27 16:18:33 d2.evaluation.coco_evaluation]: Evaluating bbox proposals ...
[04/27 16:18:34 d2.evaluation.coco_evaluation]: Proposal metrics:

| AR@100 | ARs@100 | ARm@100 | ARl@100 | AR@1000 | ARs@1000 | ARm@1000 | ARl@1000 |
|--------|---------|---------|---------|---------|----------|----------|----------|
| 15.860 | nan     | 6.435   | 17.269  | 38.733  | nan      | 26.174   | 40.611   |
[04/27 16:18:34 d2.engine.defaults]: Evaluation results for CoralReef_retinrow1val in csv format:
[04/27 16:18:34 d2.evaluation.testing]: copypaste: Task: box_proposals
[04/27 16:18:34 d2.evaluation.testing]: copypaste: AR@100,ARs@100,ARm@100,ARl@100,AR@1000,ARs@1000,ARm@1000,ARl@1000
[04/27 16:18:34 d2.evaluation.testing]: copypaste: 15.8597,nan,6.4348,17.2692,38.7330,nan,26.1739,40.6112
[04/27 16:18:34 d2.utils.events]: eta: 0:03:03 iter: 199 total_loss: 0.417 loss_rpn_cls: 0.254 loss_rpn_loc: 0.174 time: 1.8124 data_time: 0.9285 lr: 0.000200 max_mem: 8398M
[04/27 16:19:08 d2.utils.events]: eta: 0:02:26 iter: 219 total_loss: 0.408 loss_rpn_cls: 0.240 loss_rpn_loc: 0.174 time: 1.8042 data_time: 0.8592 lr: 0.000220 max_mem: 8398M
[04/27 16:19:44 d2.utils.events]: eta: 0:01:50 iter: 239 total_loss: 0.394 loss_rpn_cls: 0.227 loss_rpn_loc: 0.167 time: 1.8016 data_time: 0.9220 lr: 0.000240 max_mem: 8398M
[04/27 16:20:20 d2.utils.events]: eta: 0:01:14 iter: 259 total_loss: 0.411 loss_rpn_cls: 0.225 loss_rpn_loc: 0.173 time: 1.7998 data_time: 0.9356 lr: 0.000260 max_mem: 8398M
[04/27 16:20:55 d2.utils.events]: eta: 0:00:37 iter: 279 total_loss: 0.379 loss_rpn_cls: 0.213 loss_rpn_loc: 0.169 time: 1.7965 data_time: 0.8668 lr: 0.000280 max_mem: 8398M
[04/27 16:21:32 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/27 16:21:32 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/27 16:21:32 d2.evaluation.evaluator]: Start inference on 27 images
[04/27 16:21:35 d2.evaluation.evaluator]: Inference done 11/27. 0.0931 s / img. ETA=0:00:02
[04/27 16:21:39 d2.evaluation.evaluator]: Total inference time: 0:00:04.489641 (0.204075 s / img per device, on 1 devices)
[04/27 16:21:39 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.093324 s / img per device, on 1 devices)
[04/27 16:21:39 d2.evaluation.coco_evaluation]: Evaluating bbox proposals ...
[04/27 16:21:40 d2.evaluation.coco_evaluation]: Proposal metrics:

| AR@100 | ARs@100 | ARm@100 | ARl@100 | AR@1000 | ARs@1000 | ARm@1000 | ARl@1000 |
|--------|---------|---------|---------|---------|----------|----------|----------|
| 20.170 | nan     | 8.261   | 21.951  | 42.579  | nan      | 30.783   | 44.343   |
[04/27 16:21:40 d2.engine.defaults]: Evaluation results for CoralReef_retinrow1val in csv format:
[04/27 16:21:40 d2.evaluation.testing]: copypaste: Task: box_proposals
[04/27 16:21:40 d2.evaluation.testing]: copypaste: AR@100,ARs@100,ARm@100,ARl@100,AR@1000,ARs@1000,ARm@1000,ARl@1000
[04/27 16:21:40 d2.evaluation.testing]: copypaste: 20.1697,nan,8.2609,21.9506,42.5792,nan,30.7826,44.3433
[04/27 16:21:40 d2.utils.events]: eta: 0:00:01 iter: 299 total_loss: 0.407 loss_rpn_cls: 0.227 loss_rpn_loc: 0.180 time: 1.7959 data_time: 0.8963 lr: 0.000300 max_mem: 8398M
[04/27 16:21:40 d2.engine.hooks]: Overall training speed: 297 iterations in 0:08:55 (1.8020 s / it)
[04/27 16:21:40 d2.engine.hooks]: Total training time: 0:09:12 (0:00:17 on hooks)
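For this RPN row the evaluator reports proposal recall (AR@100 / AR@1000) rather than box AP, because a `ProposalNetwork` only outputs proposals. A minimal sketch of running that evaluation on its own, reusing the dataset name from the log; the weights path is hypothetical and the exact script is an assumption:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/rpn_R_50_FPN_1x.yaml"))
cfg.MODEL.WEIGHTS = "output/model_final.pth"  # hypothetical path to the fine-tuned RPN weights

predictor = DefaultPredictor(cfg)
val_loader = build_detection_test_loader(cfg, "CoralReef_retinrow1val")
evaluator = COCOEvaluator("CoralReef_retinrow1val", cfg, False, output_dir="coco_eval")
# With only "proposals" in the model output, COCOEvaluator computes the box_proposals
# task, which yields the AR@100 / AR@1000 numbers shown in the tables above.
print(inference_on_dataset(predictor.model, val_loader, evaluator))
```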
RetinaNet_ROW2:
YAML_NAME = RetinaNet_ROW2
Apr 30 2020
"COCO-Detection/retinanet_R_50_FPN_3x.yaml"
RetinaNet( (backbone): FPN( (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelP6P7( (p6): Conv2d(2048, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (p7): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (head): RetinaNetHead( (cls_subnet): Sequential( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU() (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): ReLU() (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU() ) (bbox_subnet): Sequential( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU() (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): ReLU() (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU() ) (cls_score): Conv2d(256, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (bbox_pred): Conv2d(256, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) ) [04/30 12:01:25 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/30 12:01:25 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 283        | Gorgonia    | 701        | SeaRods  | 185        |
| Antillo   | 544        | Fish        | 211        | Ssid     | 29         |
| Orb       | 92         | Other_Coral | 48         | Apalm    | 218        |
| Galaxaura | 804        |             |            |          |            |
| total     | 3115       |             |            |          |            |
[04/30 12:01:25 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/30 12:01:25 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/30 12:01:25 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/30 12:01:25 d2.data.build]: Using training sampler TrainingSampler
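(The ResizeShortestEdge / RandomFlip transforms logged above are detectron2's default training augmentation. If one wanted to set them explicitly, the corresponding config keys would look roughly like the sketch below; the keys are standard detectron2 ones, but the explicit assignment is an assumption, not code taken from this run.)

```python
# Sketch only: detectron2's default INPUT settings, shown to make the TransformGens
# line above reproducible; the run above may simply have left them at their defaults.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.INPUT.MIN_SIZE_TRAIN = (640, 672, 704, 736, 768, 800)  # ResizeShortestEdge candidates
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"               # pick one edge length per image
cfg.INPUT.MAX_SIZE_TRAIN = 1333                            # cap on the longer edge
# RandomFlip is added automatically by the default DatasetMapper during training.
```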
model_final_4cafe0.pkl: 152MB [00:05, 25.5MB/s]
[04/30 12:01:39 d2.engine.train_loop]: Starting training from iteration 0
[04/30 12:02:02 d2.utils.events]: eta: 0:08:54 iter: 19 total_loss: 2.874 loss_cls: 2.373 loss_box_reg: 0.535 time: 1.1098 data_time: 0.6991 lr: 0.000020 max_mem: 7090M
[04/30 12:02:22 d2.utils.events]: eta: 0:07:49 iter: 39 total_loss: 1.455 loss_cls: 1.098 loss_box_reg: 0.361 time: 1.0538 data_time: 0.5427 lr: 0.000040 max_mem: 7090M
[04/30 12:02:40 d2.utils.events]: eta: 0:07:14 iter: 59 total_loss: 1.120 loss_cls: 0.815 loss_box_reg: 0.300 time: 1.0056 data_time: 0.4689 lr: 0.000060 max_mem: 7090M
[04/30 12:02:59 d2.utils.events]: eta: 0:06:48 iter: 79 total_loss: 1.085 loss_cls: 0.763 loss_box_reg: 0.311 time: 0.9935 data_time: 0.5035 lr: 0.000080 max_mem: 7090M
[04/30 12:03:18 d2.utils.events]: eta: 0:06:22 iter: 99 total_loss: 1.063 loss_cls: 0.732 loss_box_reg: 0.329 time: 0.9777 data_time: 0.4390 lr: 0.000100 max_mem: 7090M
[04/30 12:03:36 d2.utils.events]: eta: 0:06:01 iter: 119 total_loss: 0.922 loss_cls: 0.640 loss_box_reg: 0.292 time: 0.9672 data_time: 0.4517 lr: 0.000120 max_mem: 7090M
[04/30 12:03:54 d2.utils.events]: eta: 0:05:40 iter: 139 total_loss: 0.954 loss_cls: 0.632 loss_box_reg: 0.312 time: 0.9599 data_time: 0.4698 lr: 0.000140 max_mem: 7090M
[04/30 12:04:13 d2.utils.events]: eta: 0:05:20 iter: 159 total_loss: 0.868 loss_cls: 0.594 loss_box_reg: 0.286 time: 0.9555 data_time: 0.4686 lr: 0.000160 max_mem: 7090M
[04/30 12:04:31 d2.utils.events]: eta: 0:05:01 iter: 179 total_loss: 0.907 loss_cls: 0.601 loss_box_reg: 0.311 time: 0.9519 data_time: 0.4647 lr: 0.000180 max_mem: 7090M
[04/30 12:04:51 d2.data.build]: Distribution of instances among all 10 categories:

| category  | #instances | category    | #instances | category | #instances |
|-----------|------------|-------------|------------|----------|------------|
| Past      | 43         | Gorgonia    | 179        | SeaRods  | 36         |
| Antillo   | 127        | Fish        | 59         | Ssid     | 1          |
| Orb       | 26         | Other_Coral | 10         | Apalm    | 56         |
| Galaxaura | 347        |             |            |          |            |
| total     | 884        |             |            |          |            |
[04/30 12:04:51 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:04:51 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/30 12:04:51 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_RetinaNet2val'. Trying to convert it to COCO format ...
[04/30 12:04:51 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_RetinaNet2val' to COCO format ...)
[04/30 12:04:51 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/30 12:04:51 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/30 12:04:51 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_RetinaNet2val_coco_format.json' ...
[04/30 12:04:51 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:04:55 d2.evaluation.evaluator]: Inference done 11/27. 0.1161 s / img. ETA=0:00:03
[04/30 12:05:00 d2.evaluation.evaluator]: Total inference time: 0:00:05.861257 (0.266421 s / img per device, on 1 devices)
[04/30 12:05:00 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.122170 s / img per device, on 1 devices)
[04/30 12:05:00 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:05:00 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:05:00 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.10s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.026
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.066
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.030
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.019
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.072
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.100
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.042
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.114
[04/30 12:05:01 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50  | AP75  | APs | APm   | APl   |
|-------|-------|-------|-----|-------|-------|
| 2.649 | 6.553 | 1.671 | nan | 1.720 | 3.014 |
[04/30 12:05:01 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:05:01 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP    | category    | AP    | category | AP    |
|-----------|-------|-------------|-------|----------|-------|
| Past      | 1.191 | Gorgonia    | 4.498 | SeaRods  | 0.000 |
| Antillo   | 4.374 | Fish        | 1.912 | Ssid     | 0.000 |
| Orb       | 3.702 | Other_Coral | 0.000 | Apalm    | 5.865 |
| Galaxaura | 4.948 |             |       |          |       |
[04/30 12:05:01 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet2val in csv format:
[04/30 12:05:01 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:05:01 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:05:01 d2.evaluation.testing]: copypaste: 2.6491,6.5525,1.6706,nan,1.7205,3.0142
[04/30 12:05:01 d2.utils.events]: eta: 0:04:42 iter: 199 total_loss: 0.845 loss_cls: 0.564 loss_box_reg: 0.284 time: 0.9492 data_time: 0.4729 lr: 0.000200 max_mem: 7090M
[04/30 12:05:19 d2.utils.events]: eta: 0:04:23 iter: 219 total_loss: 0.863 loss_cls: 0.566 loss_box_reg: 0.288 time: 0.9446 data_time: 0.4241 lr: 0.000220 max_mem: 7090M
[04/30 12:05:38 d2.utils.events]: eta: 0:04:04 iter: 239 total_loss: 0.809 loss_cls: 0.541 loss_box_reg: 0.271 time: 0.9445 data_time: 0.4858 lr: 0.000240 max_mem: 7090M
[04/30 12:05:57 d2.utils.events]: eta: 0:03:45 iter: 259 total_loss: 0.776 loss_cls: 0.511 loss_box_reg: 0.266 time: 0.9453 data_time: 0.5008 lr: 0.000260 max_mem: 7090M
[04/30 12:06:17 d2.utils.events]: eta: 0:03:27 iter: 279 total_loss: 0.733 loss_cls: 0.484 loss_box_reg: 0.250 time: 0.9470 data_time: 0.5110 lr: 0.000280 max_mem: 7090M
[04/30 12:06:36 d2.utils.events]: eta: 0:03:08 iter: 299 total_loss: 0.715 loss_cls: 0.465 loss_box_reg: 0.253 time: 0.9473 data_time: 0.4999 lr: 0.000300 max_mem: 7090M
[04/30 12:06:55 d2.utils.events]: eta: 0:02:49 iter: 319 total_loss: 0.725 loss_cls: 0.470 loss_box_reg: 0.253 time: 0.9487 data_time: 0.5064 lr: 0.000320 max_mem: 7090M
[04/30 12:07:14 d2.utils.events]: eta: 0:02:31 iter: 339 total_loss: 0.724 loss_cls: 0.462 loss_box_reg: 0.251 time: 0.9493 data_time: 0.5088 lr: 0.000340 max_mem: 7090M
[04/30 12:07:33 d2.utils.events]: eta: 0:02:12 iter: 359 total_loss: 0.693 loss_cls: 0.428 loss_box_reg: 0.257 time: 0.9501 data_time: 0.5127 lr: 0.000360 max_mem: 7090M
[04/30 12:07:52 d2.utils.events]: eta: 0:01:54 iter: 379 total_loss: 0.694 loss_cls: 0.451 loss_box_reg: 0.249 time: 0.9499 data_time: 0.4953 lr: 0.000380 max_mem: 7090M
[04/30 12:08:12 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:08:12 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 12:08:12 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:08:16 d2.evaluation.evaluator]: Inference done 11/27. 0.1393 s / img. ETA=0:00:03
[04/30 12:08:20 d2.evaluation.evaluator]: Total inference time: 0:00:04.878665 (0.221758 s / img per device, on 1 devices)
[04/30 12:08:20 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.130804 s / img per device, on 1 devices)
[04/30 12:08:20 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:08:20 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:08:20 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.15s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.096
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.211
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.069
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.063
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.111
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.045
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.144
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.199
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.103
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.229
[04/30 12:08:21 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50   | AP75  | APs | APm   | APl    |
|-------|--------|-------|-----|-------|--------|
| 9.637 | 21.146 | 6.927 | nan | 6.316 | 11.083 |
[04/30 12:08:21 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:08:21 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP     | category    | AP     | category | AP     |
|-----------|--------|-------------|--------|----------|--------|
| Past      | 22.376 | Gorgonia    | 15.143 | SeaRods  | 1.292  |
| Antillo   | 11.800 | Fish        | 3.602  | Ssid     | 0.000  |
| Orb       | 6.145  | Other_Coral | 0.227  | Apalm    | 23.713 |
| Galaxaura | 12.075 |             |        |          |        |
[04/30 12:08:21 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet2val in csv format:
[04/30 12:08:21 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:08:21 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:08:21 d2.evaluation.testing]: copypaste: 9.6372,21.1461,6.9265,nan,6.3160,11.0827
[04/30 12:08:21 d2.utils.events]: eta: 0:01:35 iter: 399 total_loss: 0.659 loss_cls: 0.416 loss_box_reg: 0.247 time: 0.9508 data_time: 0.5111 lr: 0.000400 max_mem: 7090M
[04/30 12:08:39 d2.utils.events]: eta: 0:01:16 iter: 419 total_loss: 0.632 loss_cls: 0.400 loss_box_reg: 0.228 time: 0.9488 data_time: 0.4345 lr: 0.000420 max_mem: 7090M
[04/30 12:08:59 d2.utils.events]: eta: 0:00:57 iter: 439 total_loss: 0.653 loss_cls: 0.403 loss_box_reg: 0.250 time: 0.9510 data_time: 0.5344 lr: 0.000440 max_mem: 7090M
[04/30 12:09:18 d2.utils.events]: eta: 0:00:38 iter: 459 total_loss: 0.561 loss_cls: 0.363 loss_box_reg: 0.209 time: 0.9503 data_time: 0.4611 lr: 0.000460 max_mem: 7090M
[04/30 12:09:37 d2.utils.events]: eta: 0:00:19 iter: 479 total_loss: 0.612 loss_cls: 0.388 loss_box_reg: 0.234 time: 0.9496 data_time: 0.4789 lr: 0.000480 max_mem: 7090M
[04/30 12:09:58 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:09:58 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 12:09:58 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:10:01 d2.evaluation.evaluator]: Inference done 11/27. 0.1334 s / img. ETA=0:00:03
[04/30 12:10:05 d2.evaluation.evaluator]: Total inference time: 0:00:04.931438 (0.224156 s / img per device, on 1 devices)
[04/30 12:10:05 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:02 (0.131279 s / img per device, on 1 devices)
[04/30 12:10:05 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:10:05 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:10:05 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.74s).
Accumulating evaluation results...
DONE (t=0.11s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.116
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.248
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.090
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.069
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.135
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.052
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.173
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.235
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.122
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.267
[04/30 12:10:07 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP     | AP50   | AP75  | APs | APm   | APl    |
|--------|--------|-------|-----|-------|--------|
| 11.588 | 24.798 | 9.034 | nan | 6.864 | 13.472 |
[04/30 12:10:07 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:10:07 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP     | category    | AP     | category | AP     |
|-----------|--------|-------------|--------|----------|--------|
| Past      | 27.820 | Gorgonia    | 17.972 | SeaRods  | 2.723  |
| Antillo   | 14.549 | Fish        | 4.696  | Ssid     | 0.000  |
| Orb       | 6.827  | Other_Coral | 0.202  | Apalm    | 29.734 |
| Galaxaura | 11.359 |             |        |          |        |
[04/30 12:10:07 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet2val in csv format:
[04/30 12:10:07 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:10:07 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:10:07 d2.evaluation.testing]: copypaste: 11.5881,24.7983,9.0339,nan,6.8645,13.4723
[04/30 12:10:07 d2.utils.events]: eta: 0:00:00 iter: 499 total_loss: 0.681 loss_cls: 0.423 loss_box_reg: 0.260 time: 0.9500 data_time: 0.5130 lr: 0.000500 max_mem: 7090M
[04/30 12:10:07 d2.engine.hooks]: Overall training speed: 497 iterations in 0:07:53 (0.9520 s / it)
[04/30 12:10:07 d2.engine.hooks]: Total training time: 0:08:25 (0:00:31 on hooks)
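For reference, the evaluation blocks interleaved with training above (after iterations 199 and 399 and again at the end) come from periodic COCO evaluation. A hedged sketch of how that is typically wired up follows; the `CocoTrainer` name and the `coco_eval` folder are inferred from the log paths, and the exact trainer code is an assumption rather than the notebook's verbatim source.

```python
# Minimal sketch of a trainer that produces the periodic COCO evaluations seen above.
# Assumes cfg.DATASETS.TEST names the *val dataset and cfg.TEST.EVAL_PERIOD is set
# (the logs above are consistent with an EVAL_PERIOD of 200 over a 500-iteration run).
import os
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class CocoTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            output_folder = "coco_eval"   # matches the coco_eval/ paths in the logs
        os.makedirs(output_folder, exist_ok=True)
        # Older detectron2 releases take (dataset_name, cfg, distributed, output_dir);
        # newer releases accept the same call with a deprecation warning for cfg.
        return COCOEvaluator(dataset_name, cfg, False, output_dir=output_folder)
```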
RetinaNet_ROW3: YAML_NAME = "RetinaNet_ROW3"
COCO-Detection/retinanet_R_101_FPN_3x.yaml
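(For context, a model dump like the one below can be reproduced by building the model straight from this model-zoo config; this is a minimal sketch, not the author's exact cell.)

```python
# Sketch: build the RetinaNet R_101 FPN model from the zoo config and print its module tree.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

yaml_file = "COCO-Detection/retinanet_R_101_FPN_3x.yaml"
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(yaml_file))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(yaml_file)  # the .pkl is downloaded later by the trainer

model = build_model(cfg)  # constructs the architecture (weights are not loaded at this point)
print(model)              # yields the printout below
```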
RetinaNet(
(backbone): FPN(
(fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(top_block): LastLevelP6P7(
(p6): Conv2d(2048, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(p7): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
)
(bottom_up): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(6): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(7): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(8): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(9): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(10): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(11): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(12): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(13): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(14): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(15): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(16): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(17): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(18): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(19): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(20): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(21): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(22): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
)
(head): RetinaNetHead(
(cls_subnet): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): ReLU()
(6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): ReLU()
)
(bbox_subnet): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): ReLU()
(6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): ReLU()
)
(cls_score): Conv2d(256, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bbox_pred): Conv2d(256, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
)
[04/30 12:25:53 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/30 12:25:53 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/30 12:25:53 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/30 12:25:53 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/30 12:25:53 d2.data.build]: Using training sampler TrainingSampler
model_final_59f53c.pkl: 228MB [00:09, 24.2MB/s]
[04/30 12:26:03 d2.engine.train_loop]: Starting training from iteration 0
[04/30 12:26:25 d2.utils.events]: eta: 0:08:14 iter: 19 total_loss: 2.812 loss_cls: 2.362 loss_box_reg: 0.528 time: 1.0522 data_time: 0.2834 lr: 0.000020 max_mem: 8904M
[04/30 12:26:46 d2.utils.events]: eta: 0:07:49 iter: 39 total_loss: 1.616 loss_cls: 1.268 loss_box_reg: 0.363 time: 1.0292 data_time: 0.1781 lr: 0.000040 max_mem: 8904M
[04/30 12:27:06 d2.utils.events]: eta: 0:07:28 iter: 59 total_loss: 1.236 loss_cls: 0.872 loss_box_reg: 0.344 time: 1.0219 data_time: 0.1662 lr: 0.000060 max_mem: 8904M
[04/30 12:27:26 d2.utils.events]: eta: 0:07:08 iter: 79 total_loss: 1.146 loss_cls: 0.798 loss_box_reg: 0.327 time: 1.0219 data_time: 0.1913 lr: 0.000080 max_mem: 8904M
[04/30 12:27:47 d2.utils.events]: eta: 0:06:48 iter: 99 total_loss: 1.026 loss_cls: 0.718 loss_box_reg: 0.309 time: 1.0221 data_time: 0.2150 lr: 0.000100 max_mem: 8904M
[04/30 12:28:07 d2.utils.events]: eta: 0:06:27 iter: 119 total_loss: 0.945 loss_cls: 0.639 loss_box_reg: 0.300 time: 1.0210 data_time: 0.1893 lr: 0.000120 max_mem: 8912M
[04/30 12:28:27 d2.utils.events]: eta: 0:06:07 iter: 139 total_loss: 0.903 loss_cls: 0.620 loss_box_reg: 0.283 time: 1.0207 data_time: 0.1497 lr: 0.000140 max_mem: 8912M
[04/30 12:28:48 d2.utils.events]: eta: 0:05:46 iter: 159 total_loss: 0.859 loss_cls: 0.573 loss_box_reg: 0.281 time: 1.0195 data_time: 0.1817 lr: 0.000160 max_mem: 8912M
[04/30 12:29:08 d2.utils.events]: eta: 0:05:25 iter: 179 total_loss: 0.861 loss_cls: 0.603 loss_box_reg: 0.279 time: 1.0167 data_time: 0.1338 lr: 0.000180 max_mem: 8912M
[04/30 12:29:28 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:29:28 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/30 12:29:28 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_RetinaNet_row3val'. Trying to convert it to COCO format ...
[04/30 12:29:28 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_RetinaNet_row3val' to COCO format ...)
[04/30 12:29:28 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/30 12:29:28 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/30 12:29:28 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_RetinaNet_row3val_coco_format.json' ...
[04/30 12:29:28 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:29:32 d2.evaluation.evaluator]: Inference done 11/27. 0.1667 s / img. ETA=0:00:03
[04/30 12:29:36 d2.evaluation.evaluator]: Total inference time: 0:00:05.041031 (0.229138 s / img per device, on 1 devices)
[04/30 12:29:36 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.159962 s / img per device, on 1 devices)
[04/30 12:29:36 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:29:36 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:29:36 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.00s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.037
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.087
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.023
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.028
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.041
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.024
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.089
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.121
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.050
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.137
[04/30 12:29:37 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP    | AP50  | AP75  | APs | APm   | APl   |
|-------|-------|-------|-----|-------|-------|
| 3.748 | 8.697 | 2.310 | nan | 2.796 | 4.128 |
[04/30 12:29:37 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:29:37 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP     | category    | AP     | category | AP    |
|-----------|--------|-------------|--------|----------|-------|
| Past      | 1.152  | Gorgonia    | 11.186 | SeaRods  | 0.051 |
| Antillo   | 3.779  | Fish        | 0.576  | Ssid     | 0.000 |
| Orb       | 6.685  | Other_Coral | 0.000  | Apalm    | 7.121 |
| Galaxaura | 6.926  |             |        |          |       |
[04/30 12:29:37 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet_row3val in csv format:
[04/30 12:29:37 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:29:37 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:29:37 d2.evaluation.testing]: copypaste: 3.7476,8.6972,2.3096,nan,2.7955,4.1278
[04/30 12:29:37 d2.utils.events]: eta: 0:05:05 iter: 199 total_loss: 0.823 loss_cls: 0.539 loss_box_reg: 0.284 time: 1.0153 data_time: 0.1888 lr: 0.000200 max_mem: 8912M
[04/30 12:29:58 d2.utils.events]: eta: 0:04:45 iter: 219 total_loss: 0.839 loss_cls: 0.557 loss_box_reg: 0.275 time: 1.0167 data_time: 0.2137 lr: 0.000220 max_mem: 8912M
[04/30 12:30:18 d2.utils.events]: eta: 0:04:25 iter: 239 total_loss: 0.762 loss_cls: 0.490 loss_box_reg: 0.267 time: 1.0190 data_time: 0.2287 lr: 0.000240 max_mem: 8912M
[04/30 12:30:39 d2.utils.events]: eta: 0:04:05 iter: 259 total_loss: 0.761 loss_cls: 0.499 loss_box_reg: 0.259 time: 1.0204 data_time: 0.2126 lr: 0.000260 max_mem: 8912M
[04/30 12:30:59 d2.utils.events]: eta: 0:03:44 iter: 279 total_loss: 0.733 loss_cls: 0.473 loss_box_reg: 0.253 time: 1.0197 data_time: 0.1892 lr: 0.000280 max_mem: 8912M
[04/30 12:31:20 d2.utils.events]: eta: 0:03:24 iter: 299 total_loss: 0.733 loss_cls: 0.470 loss_box_reg: 0.265 time: 1.0201 data_time: 0.2073 lr: 0.000300 max_mem: 8913M
[04/30 12:31:40 d2.utils.events]: eta: 0:03:04 iter: 319 total_loss: 0.702 loss_cls: 0.446 loss_box_reg: 0.252 time: 1.0200 data_time: 0.2088 lr: 0.000320 max_mem: 8913M
[04/30 12:32:01 d2.utils.events]: eta: 0:02:43 iter: 339 total_loss: 0.649 loss_cls: 0.422 loss_box_reg: 0.231 time: 1.0193 data_time: 0.1748 lr: 0.000340 max_mem: 8913M
[04/30 12:32:21 d2.utils.events]: eta: 0:02:23 iter: 359 total_loss: 0.622 loss_cls: 0.395 loss_box_reg: 0.218 time: 1.0189 data_time: 0.1999 lr: 0.000360 max_mem: 8913M
[04/30 12:32:41 d2.utils.events]: eta: 0:02:02 iter: 379 total_loss: 0.663 loss_cls: 0.417 loss_box_reg: 0.242 time: 1.0188 data_time: 0.2001 lr: 0.000380 max_mem: 8913M
[04/30 12:33:02 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:33:02 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 12:33:02 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:33:06 d2.evaluation.evaluator]: Inference done 11/27. 0.1768 s / img. ETA=0:00:03
[04/30 12:33:09 d2.evaluation.evaluator]: Total inference time: 0:00:04.862899 (0.221041 s / img per device, on 1 devices)
[04/30 12:33:09 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.164209 s / img per device, on 1 devices)
[04/30 12:33:09 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:33:09 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:33:09 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.08s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.133
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.270
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.111
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.085
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.150
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.138
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.268
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.318
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.157
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.341
[04/30 12:33:10 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP     | AP50   | AP75   | APs | APm   | APl    |
|--------|--------|--------|-----|-------|--------|
| 13.282 | 26.992 | 11.136 | nan | 8.489 | 14.963 |
[04/30 12:33:10 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:33:10 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP     | category    | AP     | category | AP     |
|-----------|--------|-------------|--------|----------|--------|
| Past      | 27.821 | Gorgonia    | 21.174 | SeaRods  | 2.640  |
| Antillo   | 13.656 | Fish        | 1.958  | Ssid     | 15.000 |
| Orb       | 9.937  | Other_Coral | 0.000  | Apalm    | 25.997 |
| Galaxaura | 14.642 |             |        |          |        |
[04/30 12:33:10 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet_row3val in csv format:
[04/30 12:33:10 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:33:10 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:33:10 d2.evaluation.testing]: copypaste: 13.2825,26.9919,11.1358,nan,8.4889,14.9634
[04/30 12:33:10 d2.utils.events]: eta: 0:01:42 iter: 399 total_loss: 0.621 loss_cls: 0.391 loss_box_reg: 0.228 time: 1.0187 data_time: 0.1883 lr: 0.000400 max_mem: 8913M
[04/30 12:33:30 d2.utils.events]: eta: 0:01:22 iter: 419 total_loss: 0.612 loss_cls: 0.391 loss_box_reg: 0.219 time: 1.0178 data_time: 0.1658 lr: 0.000420 max_mem: 8913M
[04/30 12:33:51 d2.utils.events]: eta: 0:01:01 iter: 439 total_loss: 0.592 loss_cls: 0.379 loss_box_reg: 0.216 time: 1.0175 data_time: 0.1677 lr: 0.000440 max_mem: 8913M
[04/30 12:34:11 d2.utils.events]: eta: 0:00:41 iter: 459 total_loss: 0.590 loss_cls: 0.369 loss_box_reg: 0.220 time: 1.0175 data_time: 0.1992 lr: 0.000460 max_mem: 8913M
[04/30 12:34:31 d2.utils.events]: eta: 0:00:21 iter: 479 total_loss: 0.561 loss_cls: 0.360 loss_box_reg: 0.210 time: 1.0173 data_time: 0.1628 lr: 0.000480 max_mem: 8913M
[04/30 12:34:53 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:34:53 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 12:34:53 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:34:57 d2.evaluation.evaluator]: Inference done 11/27. 0.1624 s / img. ETA=0:00:04
[04/30 12:35:01 d2.evaluation.evaluator]: Total inference time: 0:00:05.206379 (0.236654 s / img per device, on 1 devices)
[04/30 12:35:01 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:03 (0.157919 s / img per device, on 1 devices)
[04/30 12:35:01 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:35:01 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:35:01 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.12s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.153
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.308
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.131
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.095
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.173
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.142
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.282
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.332
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.157
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.356
[04/30 12:35:02 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP     | AP50   | AP75   | APs | APm   | APl    |
|--------|--------|--------|-----|-------|--------|
| 15.335 | 30.814 | 13.063 | nan | 9.496 | 17.298 |
[04/30 12:35:02 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:35:02 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category  | AP     | category    | AP     | category | AP     |
|-----------|--------|-------------|--------|----------|--------|
| Past      | 28.769 | Gorgonia    | 23.868 | SeaRods  | 3.377  |
| Antillo   | 16.076 | Fish        | 4.543  | Ssid     | 18.000 |
| Orb       | 11.893 | Other_Coral | 0.000  | Apalm    | 30.423 |
| Galaxaura | 16.397 |             |        |          |        |
[04/30 12:35:02 d2.engine.defaults]: Evaluation results for CoralReef_RetinaNet_row3val in csv format:
[04/30 12:35:02 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:35:02 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:35:02 d2.evaluation.testing]: copypaste: 15.3346,30.8138,13.0630,nan,9.4961,17.2978
[04/30 12:35:02 d2.utils.events]: eta: 0:00:01 iter: 499 total_loss: 0.536 loss_cls: 0.344 loss_box_reg: 0.199 time: 1.0171 data_time: 0.1918 lr: 0.000500 max_mem: 8913M
[04/30 12:35:02 d2.engine.hooks]: Overall training speed: 497 iterations in 0:08:26 (1.0191 s / it)
[04/30 12:35:02 d2.engine.hooks]: Total training time: 0:08:55 (0:00:28 on hooks)
Faster_Row4 = "COCO-Detection/faster_rcnn_R_50_C4_3x.yaml"
YAML_NAME = Faster_Row4
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001
# cfg.SOLVER.WARMUP_ITERS = 100
cfg.SOLVER.MAX_ITER = 1000
# cfg.SOLVER.STEPS = (500, 1000)
cfg.SOLVER.GAMMA = 0.05
# The model itself
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)
cfg.TEST.EVAL_PERIOD = 300
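Putting the snippet above in context, here is a minimal end-to-end sketch of how this Faster_Row4 run might be assembled. The dataset names, the ordering of the `classes` list, and the use of a plain DefaultTrainer are assumptions for illustration, not the notebook's verbatim code; only the cfg values shown above are taken from the original. The GeneralizedRCNN printout that follows is what the trainer builds from this config.

```python
# Hedged sketch of the full Faster_Row4 setup; everything not shown in the snippet
# above (dataset names, class order, trainer choice) is an illustrative assumption.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

# The 10 categories reported in the logs; the order here is illustrative.
classes = ["Past", "Gorgonia", "SeaRods", "Antillo", "Fish",
           "Ssid", "Orb", "Other_Coral", "Apalm", "Galaxaura"]

Faster_Row4 = "COCO-Detection/faster_rcnn_R_50_C4_3x.yaml"
YAML_NAME = Faster_Row4

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(YAML_NAME))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(YAML_NAME)  # start from COCO-pretrained weights
# Assumes the CoralReef_* datasets were registered beforehand (e.g. register_coco_instances).
cfg.DATASETS.TRAIN = ("CoralReef_Faster_row4train",)         # assumed dataset names
cfg.DATASETS.TEST = ("CoralReef_Faster_row4val",)
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001
cfg.SOLVER.MAX_ITER = 1000
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)
cfg.TEST.EVAL_PERIOD = 300

trainer = DefaultTrainer(cfg)        # or a CocoTrainer as sketched earlier for periodic eval
trainer.resume_or_load(resume=False)
trainer.train()
```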
GeneralizedRCNN(
(backbone): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
)
(proposal_generator): RPN(
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
(rpn_head): StandardRPNHead(
(conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(objectness_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1))
)
)
(roi_heads): Res5ROIHeads(
(pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=2048, out_features=11, bias=True)
(bbox_pred): Linear(in_features=2048, out_features=40, bias=True)
)
)
)
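For reference, the architecture dump above is just the printed module tree of the built model. A minimal sketch of how such a dump is produced, assuming the standard R50-C4 Faster R-CNN model-zoo config (the exact Faster_ROW1 YAML used in this run may differ):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
# Assumed base config; substitute the YAML actually used for Faster_ROW1.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml"))
model = build_model(cfg)  # GeneralizedRCNN with a ResNet C4 backbone and Res5ROIHeads
print(model)              # prints a module tree like the one pasted above
```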
[04/30 12:47:49 d2.data.build]: Removed 0 images with no usable annotations. 106 images left.
[04/30 12:47:49 d2.data.common]: Serializing 106 elements to byte tensors and concatenating them all ...
[04/30 12:47:49 d2.data.common]: Serialized dataset takes 0.22 MiB
[04/30 12:47:49 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/30 12:47:49 d2.data.build]: Using training sampler TrainingSampler
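The TransformGens reported above come straight from the input config. A sketch of the corresponding settings, continuing the `cfg` from the earlier snippet (the values are read off the log line itself):

```python
# Multi-scale training: the shortest edge is sampled per image from this tuple,
# the longest edge is capped at 1333, and a random horizontal flip is applied.
cfg.INPUT.MIN_SIZE_TRAIN = (640, 672, 704, 736, 768, 800)
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
cfg.INPUT.MAX_SIZE_TRAIN = 1333
# detectron2 turns these into ResizeShortestEdge(...) + RandomFlip(), as logged.
```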
model_final_f97cb7.pkl: 136MB [00:06, 22.3MB/s]
'roi_heads.box_predictor.cls_score.weight' has shape (81, 2048) in the checkpoint but (11, 2048) in the model! Skipped.
'roi_heads.box_predictor.cls_score.bias' has shape (81,) in the checkpoint but (11,) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.weight' has shape (320, 2048) in the checkpoint but (40, 2048) in the model! Skipped.
'roi_heads.box_predictor.bbox_pred.bias' has shape (320,) in the checkpoint but (40,) in the model! Skipped.
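The four "Skipped" warnings above are expected: the COCO checkpoint has 81 classification outputs (80 classes + background) and 320 box-regression outputs, while this model is configured for 10 coral categories (11 and 40 outputs), so the box-predictor heads are re-initialised and trained from scratch. A hedged sketch of the relevant config, continuing the `cfg` above (the checkpoint-URL call is an assumption about how model_final_f97cb7.pkl was fetched):

```python
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 10  # 10 coral categories -> cls_score (11, 2048), bbox_pred (40, 2048)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml")
# Checkpoint tensors whose shapes do not match the model are skipped, as in the log.
```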
[04/30 12:47:56 d2.engine.train_loop]: Starting training from iteration 0
[04/30 12:48:16 d2.utils.events]: eta: 0:15:30 iter: 19 total_loss: 4.418 loss_cls: 2.351 loss_box_reg: 0.762 loss_rpn_cls: 1.145 loss_rpn_loc: 0.214 time: 0.9589 data_time: 0.4697 lr: 0.000020 max_mem: 8913M
[04/30 12:48:34 d2.utils.events]: eta: 0:15:12 iter: 39 total_loss: 3.376 loss_cls: 1.883 loss_box_reg: 0.782 loss_rpn_cls: 0.557 loss_rpn_loc: 0.188 time: 0.9417 data_time: 0.3990 lr: 0.000040 max_mem: 8913M
[04/30 12:48:53 d2.utils.events]: eta: 0:14:52 iter: 59 total_loss: 2.611 loss_cls: 1.154 loss_box_reg: 0.780 loss_rpn_cls: 0.477 loss_rpn_loc: 0.200 time: 0.9389 data_time: 0.4188 lr: 0.000060 max_mem: 8913M
[04/30 12:49:12 d2.utils.events]: eta: 0:14:33 iter: 79 total_loss: 2.523 loss_cls: 1.092 loss_box_reg: 0.830 loss_rpn_cls: 0.428 loss_rpn_loc: 0.191 time: 0.9446 data_time: 0.4257 lr: 0.000080 max_mem: 8913M
[04/30 12:49:31 d2.utils.events]: eta: 0:14:14 iter: 99 total_loss: 2.410 loss_cls: 1.046 loss_box_reg: 0.813 loss_rpn_cls: 0.380 loss_rpn_loc: 0.185 time: 0.9472 data_time: 0.4196 lr: 0.000100 max_mem: 8913M
[04/30 12:49:50 d2.utils.events]: eta: 0:13:55 iter: 119 total_loss: 2.409 loss_cls: 1.006 loss_box_reg: 0.829 loss_rpn_cls: 0.381 loss_rpn_loc: 0.201 time: 0.9463 data_time: 0.4018 lr: 0.000120 max_mem: 8913M
[04/30 12:50:09 d2.utils.events]: eta: 0:13:35 iter: 139 total_loss: 2.303 loss_cls: 0.982 loss_box_reg: 0.828 loss_rpn_cls: 0.315 loss_rpn_loc: 0.163 time: 0.9479 data_time: 0.4161 lr: 0.000140 max_mem: 8913M
[04/30 12:50:28 d2.utils.events]: eta: 0:13:16 iter: 159 total_loss: 2.281 loss_cls: 0.944 loss_box_reg: 0.837 loss_rpn_cls: 0.318 loss_rpn_loc: 0.181 time: 0.9447 data_time: 0.3823 lr: 0.000160 max_mem: 8913M
[04/30 12:50:47 d2.utils.events]: eta: 0:12:59 iter: 179 total_loss: 2.228 loss_cls: 0.912 loss_box_reg: 0.819 loss_rpn_cls: 0.302 loss_rpn_loc: 0.189 time: 0.9465 data_time: 0.4248 lr: 0.000180 max_mem: 8913M
[04/30 12:51:06 d2.utils.events]: eta: 0:12:38 iter: 199 total_loss: 2.190 loss_cls: 0.892 loss_box_reg: 0.852 loss_rpn_cls: 0.299 loss_rpn_loc: 0.177 time: 0.9439 data_time: 0.3832 lr: 0.000200 max_mem: 8913M
[04/30 12:51:25 d2.utils.events]: eta: 0:12:20 iter: 219 total_loss: 2.171 loss_cls: 0.855 loss_box_reg: 0.841 loss_rpn_cls: 0.295 loss_rpn_loc: 0.185 time: 0.9455 data_time: 0.4264 lr: 0.000220 max_mem: 8913M
[04/30 12:51:44 d2.utils.events]: eta: 0:12:01 iter: 239 total_loss: 2.111 loss_cls: 0.813 loss_box_reg: 0.858 loss_rpn_cls: 0.270 loss_rpn_loc: 0.170 time: 0.9454 data_time: 0.4116 lr: 0.000240 max_mem: 8913M
[04/30 12:52:02 d2.utils.events]: eta: 0:11:41 iter: 259 total_loss: 2.034 loss_cls: 0.788 loss_box_reg: 0.855 loss_rpn_cls: 0.248 loss_rpn_loc: 0.167 time: 0.9426 data_time: 0.3786 lr: 0.000260 max_mem: 8913M
[04/30 12:52:21 d2.utils.events]: eta: 0:11:21 iter: 279 total_loss: 2.017 loss_cls: 0.711 loss_box_reg: 0.823 loss_rpn_cls: 0.240 loss_rpn_loc: 0.179 time: 0.9432 data_time: 0.4116 lr: 0.000280 max_mem: 8913M
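The evaluation block that starts below, and repeats roughly every 300 iterations, is produced by the periodic test hook. A sketch of how it is typically wired up, continuing the `cfg` above; EVAL_PERIOD = 300 is inferred from the spacing in this log, and the COCOEvaluator call uses the 2020-era signature:

```python
from detectron2.evaluation import COCOEvaluator

cfg.DATASETS.TEST = ("CoralReef_RCNN_ROW4val",)
cfg.TEST.EVAL_PERIOD = 300  # eval runs appear near iterations 299, 599, 899 in this log

# Typically returned from DefaultTrainer.build_evaluator(cfg, dataset_name):
evaluator = COCOEvaluator("CoralReef_RCNN_ROW4val", cfg, False, output_dir="coco_eval")
```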
[04/30 12:52:40 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:52:40 d2.data.common]: Serialized dataset takes 0.06 MiB
WARNING [04/30 12:52:40 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'CoralReef_RCNN_ROW4val'. Trying to convert it to COCO format ...
[04/30 12:52:40 d2.data.datasets.coco]: Converting annotations of dataset 'CoralReef_RCNN_ROW4val' to COCO format ...
[04/30 12:52:40 d2.data.datasets.coco]: Converting dataset dicts into COCO format
[04/30 12:52:40 d2.data.datasets.coco]: Conversion finished, num images: 27, num annotations: 884
[04/30 12:52:40 d2.data.datasets.coco]: Caching COCO format annotations at 'coco_eval/CoralReef_RCNN_ROW4val_coco_format.json' ...
[04/30 12:52:40 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:52:45 d2.evaluation.evaluator]: Inference done 11/27. 0.2499 s / img. ETA=0:00:04
[04/30 12:52:49 d2.evaluation.evaluator]: Total inference time: 0:00:05.615958 (0.255271 s / img per device, on 1 devices)
[04/30 12:52:49 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.245283 s / img per device, on 1 devices)
[04/30 12:52:49 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:52:49 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:52:49 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.19s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.033
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.088
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.037
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.051
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.095
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.104
[04/30 12:52:50 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 3.291 | 8.833 | 1.485 | nan | 0.476 | 3.650 |
[04/30 12:52:50 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:52:50 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 0.000 | Gorgonia | 10.578 | SeaRods | 0.000 |
| Antillo | 5.575 | Fish | 0.000 | Ssid | 0.000 |
| Orb | 0.000 | Other_Coral | 0.000 | Apalm | 10.046 |
| Galaxaura | 6.707 | | | | |
[04/30 12:52:50 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW4val in csv format:
[04/30 12:52:50 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:52:50 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:52:50 d2.evaluation.testing]: copypaste: 3.2906,8.8330,1.4853,nan,0.4763,3.6504
[04/30 12:52:50 d2.utils.events]: eta: 0:11:01 iter: 299 total_loss: 1.909 loss_cls: 0.705 loss_box_reg: 0.834 loss_rpn_cls: 0.207 loss_rpn_loc: 0.151 time: 0.9401 data_time: 0.3599 lr: 0.000300 max_mem: 8913M
[04/30 12:53:09 d2.utils.events]: eta: 0:10:42 iter: 319 total_loss: 1.921 loss_cls: 0.653 loss_box_reg: 0.859 loss_rpn_cls: 0.229 loss_rpn_loc: 0.167 time: 0.9387 data_time: 0.3777 lr: 0.000320 max_mem: 8913M
[04/30 12:53:27 d2.utils.events]: eta: 0:10:24 iter: 339 total_loss: 1.869 loss_cls: 0.624 loss_box_reg: 0.829 loss_rpn_cls: 0.221 loss_rpn_loc: 0.161 time: 0.9381 data_time: 0.3829 lr: 0.000340 max_mem: 8913M
[04/30 12:53:47 d2.utils.events]: eta: 0:10:05 iter: 359 total_loss: 1.785 loss_cls: 0.621 loss_box_reg: 0.812 loss_rpn_cls: 0.219 loss_rpn_loc: 0.158 time: 0.9414 data_time: 0.4445 lr: 0.000360 max_mem: 8913M
[04/30 12:54:08 d2.utils.events]: eta: 0:09:47 iter: 379 total_loss: 1.714 loss_cls: 0.586 loss_box_reg: 0.778 loss_rpn_cls: 0.193 loss_rpn_loc: 0.160 time: 0.9457 data_time: 0.4689 lr: 0.000380 max_mem: 8913M
[04/30 12:54:27 d2.utils.events]: eta: 0:09:29 iter: 399 total_loss: 1.703 loss_cls: 0.591 loss_box_reg: 0.760 loss_rpn_cls: 0.209 loss_rpn_loc: 0.145 time: 0.9470 data_time: 0.4297 lr: 0.000400 max_mem: 8913M
[04/30 12:54:46 d2.utils.events]: eta: 0:09:11 iter: 419 total_loss: 1.631 loss_cls: 0.566 loss_box_reg: 0.720 loss_rpn_cls: 0.193 loss_rpn_loc: 0.153 time: 0.9477 data_time: 0.4045 lr: 0.000420 max_mem: 8913M
[04/30 12:55:05 d2.utils.events]: eta: 0:08:53 iter: 439 total_loss: 1.572 loss_cls: 0.537 loss_box_reg: 0.716 loss_rpn_cls: 0.199 loss_rpn_loc: 0.157 time: 0.9474 data_time: 0.3919 lr: 0.000440 max_mem: 8913M
[04/30 12:55:25 d2.utils.events]: eta: 0:08:34 iter: 459 total_loss: 1.597 loss_cls: 0.555 loss_box_reg: 0.712 loss_rpn_cls: 0.179 loss_rpn_loc: 0.150 time: 0.9486 data_time: 0.4370 lr: 0.000460 max_mem: 8913M
[04/30 12:55:45 d2.utils.events]: eta: 0:08:16 iter: 479 total_loss: 1.503 loss_cls: 0.502 loss_box_reg: 0.676 loss_rpn_cls: 0.163 loss_rpn_loc: 0.140 time: 0.9505 data_time: 0.4497 lr: 0.000480 max_mem: 8913M
[04/30 12:56:06 d2.utils.events]: eta: 0:07:58 iter: 499 total_loss: 1.502 loss_cls: 0.507 loss_box_reg: 0.678 loss_rpn_cls: 0.165 loss_rpn_loc: 0.148 time: 0.9541 data_time: 0.4806 lr: 0.000500 max_mem: 8913M
[04/30 12:56:25 d2.utils.events]: eta: 0:07:40 iter: 519 total_loss: 1.462 loss_cls: 0.499 loss_box_reg: 0.632 loss_rpn_cls: 0.177 loss_rpn_loc: 0.146 time: 0.9555 data_time: 0.4369 lr: 0.000519 max_mem: 8913M
[04/30 12:56:45 d2.utils.events]: eta: 0:07:22 iter: 539 total_loss: 1.479 loss_cls: 0.486 loss_box_reg: 0.663 loss_rpn_cls: 0.174 loss_rpn_loc: 0.149 time: 0.9570 data_time: 0.4510 lr: 0.000539 max_mem: 8913M
[04/30 12:57:06 d2.utils.events]: eta: 0:07:03 iter: 559 total_loss: 1.437 loss_cls: 0.487 loss_box_reg: 0.631 loss_rpn_cls: 0.157 loss_rpn_loc: 0.151 time: 0.9590 data_time: 0.4659 lr: 0.000559 max_mem: 8913M
[04/30 12:57:26 d2.utils.events]: eta: 0:06:45 iter: 579 total_loss: 1.401 loss_cls: 0.447 loss_box_reg: 0.641 loss_rpn_cls: 0.155 loss_rpn_loc: 0.146 time: 0.9619 data_time: 0.4871 lr: 0.000579 max_mem: 8913M
[04/30 12:57:46 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 12:57:46 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 12:57:46 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 12:57:52 d2.evaluation.evaluator]: Inference done 11/27. 0.2473 s / img. ETA=0:00:04
[04/30 12:57:56 d2.evaluation.evaluator]: Total inference time: 0:00:05.811757 (0.264171 s / img per device, on 1 devices)
[04/30 12:57:56 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.239992 s / img per device, on 1 devices)
[04/30 12:57:56 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 12:57:56 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 12:57:56 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.12s).
Accumulating evaluation results...
DONE (t=0.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.161
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.349
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.119
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.070
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.178
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.129
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.251
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.297
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.154
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.315
[04/30 12:57:57 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 16.094 | 34.902 | 11.873 | nan | 7.027 | 17.836 |
[04/30 12:57:57 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 12:57:57 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 27.430 | Gorgonia | 21.381 | SeaRods | 6.909 |
| Antillo | 15.462 | Fish | 6.953 | Ssid | 40.000 |
| Orb | 13.539 | Other_Coral | 0.000 | Apalm | 14.339 |
| Galaxaura | 14.926 | | | | |
[04/30 12:57:57 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW4val in csv format:
[04/30 12:57:57 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 12:57:57 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 12:57:57 d2.evaluation.testing]: copypaste: 16.0939,34.9015,11.8727,nan,7.0271,17.8358
[04/30 12:57:57 d2.utils.events]: eta: 0:06:25 iter: 599 total_loss: 1.405 loss_cls: 0.459 loss_box_reg: 0.650 loss_rpn_cls: 0.149 loss_rpn_loc: 0.142 time: 0.9619 data_time: 0.4052 lr: 0.000599 max_mem: 8913M
[04/30 12:58:16 d2.utils.events]: eta: 0:06:06 iter: 619 total_loss: 1.383 loss_cls: 0.449 loss_box_reg: 0.623 loss_rpn_cls: 0.160 loss_rpn_loc: 0.147 time: 0.9604 data_time: 0.3707 lr: 0.000619 max_mem: 8913M
[04/30 12:58:35 d2.utils.events]: eta: 0:05:47 iter: 639 total_loss: 1.405 loss_cls: 0.459 loss_box_reg: 0.629 loss_rpn_cls: 0.162 loss_rpn_loc: 0.151 time: 0.9607 data_time: 0.4434 lr: 0.000639 max_mem: 8913M
[04/30 12:58:55 d2.utils.events]: eta: 0:05:28 iter: 659 total_loss: 1.386 loss_cls: 0.465 loss_box_reg: 0.603 loss_rpn_cls: 0.151 loss_rpn_loc: 0.138 time: 0.9616 data_time: 0.4408 lr: 0.000659 max_mem: 8913M
[04/30 12:59:13 d2.utils.events]: eta: 0:05:08 iter: 679 total_loss: 1.350 loss_cls: 0.417 loss_box_reg: 0.613 loss_rpn_cls: 0.144 loss_rpn_loc: 0.149 time: 0.9605 data_time: 0.3721 lr: 0.000679 max_mem: 8913M
[04/30 12:59:33 d2.utils.events]: eta: 0:04:49 iter: 699 total_loss: 1.313 loss_cls: 0.444 loss_box_reg: 0.573 loss_rpn_cls: 0.149 loss_rpn_loc: 0.142 time: 0.9610 data_time: 0.4368 lr: 0.000699 max_mem: 8913M
[04/30 12:59:52 d2.utils.events]: eta: 0:04:30 iter: 719 total_loss: 1.324 loss_cls: 0.448 loss_box_reg: 0.594 loss_rpn_cls: 0.141 loss_rpn_loc: 0.135 time: 0.9614 data_time: 0.4429 lr: 0.000719 max_mem: 8913M
[04/30 13:00:12 d2.utils.events]: eta: 0:04:11 iter: 739 total_loss: 1.237 loss_cls: 0.416 loss_box_reg: 0.589 loss_rpn_cls: 0.140 loss_rpn_loc: 0.143 time: 0.9617 data_time: 0.4388 lr: 0.000739 max_mem: 8913M
[04/30 13:00:32 d2.utils.events]: eta: 0:03:52 iter: 759 total_loss: 1.228 loss_cls: 0.397 loss_box_reg: 0.570 loss_rpn_cls: 0.142 loss_rpn_loc: 0.125 time: 0.9624 data_time: 0.4351 lr: 0.000759 max_mem: 8913M
[04/30 13:00:51 d2.utils.events]: eta: 0:03:33 iter: 779 total_loss: 1.256 loss_cls: 0.406 loss_box_reg: 0.579 loss_rpn_cls: 0.128 loss_rpn_loc: 0.139 time: 0.9626 data_time: 0.4308 lr: 0.000779 max_mem: 8913M
[04/30 13:01:10 d2.utils.events]: eta: 0:03:14 iter: 799 total_loss: 1.240 loss_cls: 0.410 loss_box_reg: 0.569 loss_rpn_cls: 0.122 loss_rpn_loc: 0.136 time: 0.9621 data_time: 0.3891 lr: 0.000799 max_mem: 8913M
[04/30 13:01:30 d2.utils.events]: eta: 0:02:54 iter: 819 total_loss: 1.251 loss_cls: 0.409 loss_box_reg: 0.568 loss_rpn_cls: 0.127 loss_rpn_loc: 0.140 time: 0.9624 data_time: 0.4375 lr: 0.000819 max_mem: 8913M
[04/30 13:01:49 d2.utils.events]: eta: 0:02:35 iter: 839 total_loss: 1.249 loss_cls: 0.427 loss_box_reg: 0.563 loss_rpn_cls: 0.124 loss_rpn_loc: 0.129 time: 0.9628 data_time: 0.4373 lr: 0.000839 max_mem: 8913M
[04/30 13:02:08 d2.utils.events]: eta: 0:02:16 iter: 859 total_loss: 1.156 loss_cls: 0.378 loss_box_reg: 0.565 loss_rpn_cls: 0.115 loss_rpn_loc: 0.141 time: 0.9630 data_time: 0.4319 lr: 0.000859 max_mem: 8913M
[04/30 13:02:29 d2.utils.events]: eta: 0:01:56 iter: 879 total_loss: 1.246 loss_cls: 0.396 loss_box_reg: 0.559 loss_rpn_cls: 0.118 loss_rpn_loc: 0.138 time: 0.9638 data_time: 0.4428 lr: 0.000879 max_mem: 8913M
[04/30 13:02:49 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 13:02:49 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 13:02:49 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 13:02:53 d2.evaluation.evaluator]: Inference done 11/27. 0.2482 s / img. ETA=0:00:04
[04/30 13:02:57 d2.evaluation.evaluator]: Total inference time: 0:00:05.905283 (0.268422 s / img per device, on 1 devices)
[04/30 13:02:57 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.245926 s / img per device, on 1 devices)
[04/30 13:02:57 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 13:02:57 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 13:02:57 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.14s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.225
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.422
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.230
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.081
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.248
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.134
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.282
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.338
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.183
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.362
[04/30 13:02:59 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 22.473 | 42.191 | 22.978 | nan | 8.127 | 24.776 |
[04/30 13:02:59 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 13:02:59 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 29.880 | Gorgonia | 24.868 | SeaRods | 8.146 |
| Antillo | 14.859 | Fish | 7.362 | Ssid | 70.000 |
| Orb | 19.115 | Other_Coral | 0.000 | Apalm | 32.773 |
| Galaxaura | 17.728 | | | | |
[04/30 13:02:59 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW4val in csv format:
[04/30 13:02:59 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 13:02:59 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 13:02:59 d2.evaluation.testing]: copypaste: 22.4732,42.1913,22.9775,nan,8.1272,24.7760
[04/30 13:02:59 d2.utils.events]: eta: 0:01:37 iter: 899 total_loss: 1.180 loss_cls: 0.366 loss_box_reg: 0.529 loss_rpn_cls: 0.120 loss_rpn_loc: 0.144 time: 0.9638 data_time: 0.4103 lr: 0.000899 max_mem: 8913M
[04/30 13:03:17 d2.utils.events]: eta: 0:01:18 iter: 919 total_loss: 1.211 loss_cls: 0.403 loss_box_reg: 0.589 loss_rpn_cls: 0.112 loss_rpn_loc: 0.136 time: 0.9631 data_time: 0.3809 lr: 0.000919 max_mem: 8913M
[04/30 13:03:37 d2.utils.events]: eta: 0:00:59 iter: 939 total_loss: 1.106 loss_cls: 0.371 loss_box_reg: 0.512 loss_rpn_cls: 0.098 loss_rpn_loc: 0.110 time: 0.9636 data_time: 0.4421 lr: 0.000939 max_mem: 8913M
[04/30 13:03:57 d2.utils.events]: eta: 0:00:39 iter: 959 total_loss: 1.195 loss_cls: 0.398 loss_box_reg: 0.557 loss_rpn_cls: 0.124 loss_rpn_loc: 0.139 time: 0.9645 data_time: 0.4532 lr: 0.000959 max_mem: 8913M
[04/30 13:04:17 d2.utils.events]: eta: 0:00:20 iter: 979 total_loss: 1.175 loss_cls: 0.388 loss_box_reg: 0.545 loss_rpn_cls: 0.107 loss_rpn_loc: 0.146 time: 0.9649 data_time: 0.4345 lr: 0.000979 max_mem: 8913M
[04/30 13:04:38 d2.data.common]: Serializing 27 elements to byte tensors and concatenating them all ...
[04/30 13:04:38 d2.data.common]: Serialized dataset takes 0.06 MiB
[04/30 13:04:38 d2.evaluation.evaluator]: Start inference on 27 images
[04/30 13:04:43 d2.evaluation.evaluator]: Inference done 11/27. 0.2437 s / img. ETA=0:00:04
[04/30 13:04:47 d2.evaluation.evaluator]: Total inference time: 0:00:05.847406 (0.265791 s / img per device, on 1 devices)
[04/30 13:04:47 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.242351 s / img per device, on 1 devices)
[04/30 13:04:47 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/30 13:04:47 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[04/30 13:04:47 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.57s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.204
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.401
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.175
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.085
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.269
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.149
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.304
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.362
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.176
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.388
[04/30 13:04:49 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 20.441 | 40.101 | 17.539 | nan | 8.469 | 26.871 |
[04/30 13:04:49 d2.evaluation.coco_evaluation]: Note that some metrics cannot be computed.
[04/30 13:04:49 d2.evaluation.coco_evaluation]: Per-category bbox AP:

| category | AP | category | AP | category | AP |
|---|---|---|---|---|---|
| Past | 28.373 | Gorgonia | 23.806 | SeaRods | 11.007 |
| Antillo | 13.601 | Fish | 8.615 | Ssid | 40.000 |
| Orb | 24.310 | Other_Coral | 0.000 | Apalm | 37.524 |
| Galaxaura | 17.177 | | | | |
[04/30 13:04:49 d2.engine.defaults]: Evaluation results for CoralReef_RCNN_ROW4val in csv format:
[04/30 13:04:49 d2.evaluation.testing]: copypaste: Task: bbox
[04/30 13:04:49 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/30 13:04:49 d2.evaluation.testing]: copypaste: 20.4414,40.1012,17.5394,nan,8.4687,26.8713
[04/30 13:04:49 d2.utils.events]: eta: 0:00:00 iter: 999 total_loss: 1.143 loss_cls: 0.356 loss_box_reg: 0.550 loss_rpn_cls: 0.107 loss_rpn_loc: 0.128 time: 0.9654 data_time: 0.4392 lr: 0.000999 max_mem: 8913M
[04/30 13:04:49 d2.engine.hooks]: Overall training speed: 997 iterations in 0:16:03 (0.9664 s / it)
[04/30 13:04:49 d2.engine.hooks]: Total training time: 0:16:50 (0:00:47 on hooks)
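For completeness, a minimal training loop consistent with the log above (1000 iterations, learning rate warming up to roughly 0.001), continuing the `cfg` sketch; the training-split name and output directory are assumptions, not values taken from the log:

```python
import os
from detectron2.engine import DefaultTrainer

cfg.DATASETS.TRAIN = ("CoralReef_RCNN_ROW4train",)  # assumed name of the 106-image split
cfg.SOLVER.BASE_LR = 0.001   # the log shows lr ramping to 0.000999 by iteration 999
cfg.SOLVER.MAX_ITER = 1000
cfg.OUTPUT_DIR = "./output"  # assumption
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```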
Maybe Future work:
Detection Tasks:
Faster RCNN:
- [ ] FASTER: Row1 - Adding the model. - https://github.com/m-kashani/MS_Project/issues/7#issuecomment-596801121
- [x] FASTER: Row6 - https://github.com/m-kashani/MS_Project/issues/7#issuecomment-619892606
- [x] FASTER: Row7 - https://github.com/m-kashani/MS_Project/issues/7#issuecomment-619892704
- [ ] FASTER: Row8 - link
- [ ] FASTER: Row9 - link
- [ ] FASTER: Row10 - link

Retinanet:
- Retinanet: Row1 - https://github.com/m-kashani/MS_Project/issues/7#issuecomment-620036687
- Retinanet: Row2

RPN & Fast:
- FAST: https://github.com/m-kashani/MS_Project/issues/7#issuecomment-619899278
- RPN: https://github.com/m-kashani/MS_Project/issues/7#issuecomment-620109161

Setting & Configuration:
Documentation for detectron2.model_zoo.model_zoo: https://detectron2.readthedocs.io/_modules/detectron2/model_zoo/model_zoo.html
Source code for detectron2.model_zoo.model_zoo: https://github.com/facebookresearch/detectron2/blob/master/detectron2/model_zoo/model_zoo.py
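For this "Setting & Configuration" item, a short sketch of the model_zoo calls those two links document (the YAML name below is the standard C4 baseline and is an assumption about which config this project uses):

```python
from detectron2 import model_zoo

config_path = model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml")
weights_url = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml")
model = model_zoo.get("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml", trained=True)  # pre-trained model in one call
```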
Experience on pre-trained models:
Is this still left to do?
Segmentation Task:
Image 1: https://coralunique.s3.amazonaws.com/preTest1.jpg
Image 2: https://coralunique.s3.amazonaws.com/PreTest2.jpg
Image 3: https://coralunique.s3.amazonaws.com/PreTest3.jpg
Image 4: