deepaksinghcv closed this issue 4 years ago
I later executed the code on a single GPU as follows:
python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only
It throws the following error:
[10/10 10:56:44 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 10:56:45 d2.evaluation.evaluator]: Inference done 11/4952. 0.0779 s / img. ETA=0:06:34
[10/10 10:56:50 d2.evaluation.evaluator]: Inference done 72/4952. 0.0798 s / img. ETA=0:06:40
[10/10 10:56:55 d2.evaluation.evaluator]: Inference done 136/4952. 0.0782 s / img. ETA=0:06:28
[10/10 10:57:00 d2.evaluation.evaluator]: Inference done 200/4952. 0.0774 s / img. ETA=0:06:19
[10/10 10:57:05 d2.evaluation.evaluator]: Inference done 264/4952. 0.0771 s / img. ETA=0:06:12
[10/10 10:57:10 d2.evaluation.evaluator]: Inference done 328/4952. 0.0769 s / img. ETA=0:06:06
[10/10 10:57:16 d2.evaluation.evaluator]: Inference done 392/4952. 0.0769 s / img. ETA=0:06:01
[10/10 10:57:21 d2.evaluation.evaluator]: Inference done 456/4952. 0.0767 s / img. ETA=0:05:55
[10/10 10:57:26 d2.evaluation.evaluator]: Inference done 519/4952. 0.0768 s / img. ETA=0:05:50
[10/10 10:57:31 d2.evaluation.evaluator]: Inference done 583/4952. 0.0768 s / img. ETA=0:05:45
[10/10 10:57:36 d2.evaluation.evaluator]: Inference done 647/4952. 0.0767 s / img. ETA=0:05:40
[10/10 10:57:41 d2.evaluation.evaluator]: Inference done 710/4952. 0.0768 s / img. ETA=0:05:35
[10/10 10:57:46 d2.evaluation.evaluator]: Inference done 774/4952. 0.0767 s / img. ETA=0:05:30
[10/10 10:57:51 d2.evaluation.evaluator]: Inference done 838/4952. 0.0767 s / img. ETA=0:05:25
[10/10 10:57:56 d2.evaluation.evaluator]: Inference done 902/4952. 0.0767 s / img. ETA=0:05:20
[10/10 10:58:01 d2.evaluation.evaluator]: Inference done 966/4952. 0.0767 s / img. ETA=0:05:15
[10/10 10:58:06 d2.evaluation.evaluator]: Inference done 1029/4952. 0.0767 s / img. ETA=0:05:10
[10/10 10:58:11 d2.evaluation.evaluator]: Inference done 1093/4952. 0.0767 s / img. ETA=0:05:05
[10/10 10:58:16 d2.evaluation.evaluator]: Inference done 1156/4952. 0.0768 s / img. ETA=0:05:00
[10/10 10:58:21 d2.evaluation.evaluator]: Inference done 1220/4952. 0.0768 s / img. ETA=0:04:55
[10/10 10:58:26 d2.evaluation.evaluator]: Inference done 1283/4952. 0.0768 s / img. ETA=0:04:50
[10/10 10:58:31 d2.evaluation.evaluator]: Inference done 1347/4952. 0.0768 s / img. ETA=0:04:45
[10/10 10:58:36 d2.evaluation.evaluator]: Inference done 1411/4952. 0.0768 s / img. ETA=0:04:40
[10/10 10:58:41 d2.evaluation.evaluator]: Inference done 1474/4952. 0.0768 s / img. ETA=0:04:35
[10/10 10:58:46 d2.evaluation.evaluator]: Inference done 1537/4952. 0.0769 s / img. ETA=0:04:30
[10/10 10:58:51 d2.evaluation.evaluator]: Inference done 1600/4952. 0.0769 s / img. ETA=0:04:25
[10/10 10:58:56 d2.evaluation.evaluator]: Inference done 1663/4952. 0.0769 s / img. ETA=0:04:20
[10/10 10:59:01 d2.evaluation.evaluator]: Inference done 1727/4952. 0.0769 s / img. ETA=0:04:15
[10/10 10:59:06 d2.evaluation.evaluator]: Inference done 1791/4952. 0.0769 s / img. ETA=0:04:10
[10/10 10:59:12 d2.evaluation.evaluator]: Inference done 1855/4952. 0.0769 s / img. ETA=0:04:05
[10/10 10:59:17 d2.evaluation.evaluator]: Inference done 1919/4952. 0.0769 s / img. ETA=0:04:00
[10/10 10:59:22 d2.evaluation.evaluator]: Inference done 1983/4952. 0.0769 s / img. ETA=0:03:55
[10/10 10:59:27 d2.evaluation.evaluator]: Inference done 2047/4952. 0.0769 s / img. ETA=0:03:50
[10/10 10:59:32 d2.evaluation.evaluator]: Inference done 2110/4952. 0.0769 s / img. ETA=0:03:45
[10/10 10:59:37 d2.evaluation.evaluator]: Inference done 2174/4952. 0.0769 s / img. ETA=0:03:40
[10/10 10:59:42 d2.evaluation.evaluator]: Inference done 2238/4952. 0.0769 s / img. ETA=0:03:35
[10/10 10:59:47 d2.evaluation.evaluator]: Inference done 2301/4952. 0.0769 s / img. ETA=0:03:30
[10/10 10:59:52 d2.evaluation.evaluator]: Inference done 2364/4952. 0.0769 s / img. ETA=0:03:25
[10/10 10:59:57 d2.evaluation.evaluator]: Inference done 2427/4952. 0.0769 s / img. ETA=0:03:20
[10/10 11:00:02 d2.evaluation.evaluator]: Inference done 2490/4952. 0.0769 s / img. ETA=0:03:15
[10/10 11:00:07 d2.evaluation.evaluator]: Inference done 2553/4952. 0.0770 s / img. ETA=0:03:10
[10/10 11:00:12 d2.evaluation.evaluator]: Inference done 2616/4952. 0.0770 s / img. ETA=0:03:05
[10/10 11:00:17 d2.evaluation.evaluator]: Inference done 2678/4952. 0.0770 s / img. ETA=0:03:00
[10/10 11:00:22 d2.evaluation.evaluator]: Inference done 2741/4952. 0.0770 s / img. ETA=0:02:55
[10/10 11:00:27 d2.evaluation.evaluator]: Inference done 2804/4952. 0.0770 s / img. ETA=0:02:50
[10/10 11:00:32 d2.evaluation.evaluator]: Inference done 2867/4952. 0.0770 s / img. ETA=0:02:45
[10/10 11:00:37 d2.evaluation.evaluator]: Inference done 2926/4952. 0.0771 s / img. ETA=0:02:41
[10/10 11:00:42 d2.evaluation.evaluator]: Inference done 2989/4952. 0.0772 s / img. ETA=0:02:36
[10/10 11:00:47 d2.evaluation.evaluator]: Inference done 3053/4952. 0.0772 s / img. ETA=0:02:31
[10/10 11:00:52 d2.evaluation.evaluator]: Inference done 3116/4952. 0.0772 s / img. ETA=0:02:26
[10/10 11:00:57 d2.evaluation.evaluator]: Inference done 3179/4952. 0.0772 s / img. ETA=0:02:21
[10/10 11:01:02 d2.evaluation.evaluator]: Inference done 3242/4952. 0.0772 s / img. ETA=0:02:16
[10/10 11:01:08 d2.evaluation.evaluator]: Inference done 3305/4952. 0.0772 s / img. ETA=0:02:11
[10/10 11:01:13 d2.evaluation.evaluator]: Inference done 3368/4952. 0.0772 s / img. ETA=0:02:06
[10/10 11:01:18 d2.evaluation.evaluator]: Inference done 3431/4952. 0.0772 s / img. ETA=0:02:01
[10/10 11:01:23 d2.evaluation.evaluator]: Inference done 3494/4952. 0.0772 s / img. ETA=0:01:56
[10/10 11:01:28 d2.evaluation.evaluator]: Inference done 3557/4952. 0.0772 s / img. ETA=0:01:51
[10/10 11:01:33 d2.evaluation.evaluator]: Inference done 3619/4952. 0.0773 s / img. ETA=0:01:46
[10/10 11:01:38 d2.evaluation.evaluator]: Inference done 3682/4952. 0.0773 s / img. ETA=0:01:41
[10/10 11:01:43 d2.evaluation.evaluator]: Inference done 3745/4952. 0.0773 s / img. ETA=0:01:36
[10/10 11:01:48 d2.evaluation.evaluator]: Inference done 3808/4952. 0.0773 s / img. ETA=0:01:31
[10/10 11:01:53 d2.evaluation.evaluator]: Inference done 3872/4952. 0.0773 s / img. ETA=0:01:26
[10/10 11:01:58 d2.evaluation.evaluator]: Inference done 3935/4952. 0.0773 s / img. ETA=0:01:21
[10/10 11:02:03 d2.evaluation.evaluator]: Inference done 3998/4952. 0.0773 s / img. ETA=0:01:16
[10/10 11:02:08 d2.evaluation.evaluator]: Inference done 4061/4952. 0.0773 s / img. ETA=0:01:10
[10/10 11:02:13 d2.evaluation.evaluator]: Inference done 4125/4952. 0.0773 s / img. ETA=0:01:05
[10/10 11:02:18 d2.evaluation.evaluator]: Inference done 4188/4952. 0.0773 s / img. ETA=0:01:00
[10/10 11:02:23 d2.evaluation.evaluator]: Inference done 4251/4952. 0.0773 s / img. ETA=0:00:55
[10/10 11:02:28 d2.evaluation.evaluator]: Inference done 4313/4952. 0.0773 s / img. ETA=0:00:50
[10/10 11:02:33 d2.evaluation.evaluator]: Inference done 4375/4952. 0.0773 s / img. ETA=0:00:46
[10/10 11:02:38 d2.evaluation.evaluator]: Inference done 4439/4952. 0.0773 s / img. ETA=0:00:40
[10/10 11:02:43 d2.evaluation.evaluator]: Inference done 4501/4952. 0.0774 s / img. ETA=0:00:35
[10/10 11:02:48 d2.evaluation.evaluator]: Inference done 4564/4952. 0.0774 s / img. ETA=0:00:30
[10/10 11:02:54 d2.evaluation.evaluator]: Inference done 4627/4952. 0.0774 s / img. ETA=0:00:25
[10/10 11:02:59 d2.evaluation.evaluator]: Inference done 4690/4952. 0.0774 s / img. ETA=0:00:20
[10/10 11:03:04 d2.evaluation.evaluator]: Inference done 4753/4952. 0.0774 s / img. ETA=0:00:15
[10/10 11:03:09 d2.evaluation.evaluator]: Inference done 4816/4952. 0.0774 s / img. ETA=0:00:10
[10/10 11:03:14 d2.evaluation.evaluator]: Inference done 4879/4952. 0.0774 s / img. ETA=0:00:05
[10/10 11:03:19 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0774 s / img. ETA=0:00:00
[10/10 11:03:19 d2.evaluation.evaluator]: Total inference time: 0:06:34.603606 (0.079766 s / img per device, on 1 devices)
[10/10 11:03:19 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:22 (0.077365 s / img per device, on 1 devices)
[10/10 11:03:20 detectron2]: Image level evaluation complete for custom_voc_2007_test
[10/10 11:03:20 detectron2]: Results for custom_voc_2007_test
Traceback (most recent call last):
File "main.py", line 198, in <module>
args=(args,),
File "/home/dksingh/objdet/detectron2/detectron2/engine/launch.py", line 62, in launch
main_func(*args)
File "main.py", line 177, in main
return do_test(cfg, model)
File "main.py", line 63, in do_test
evaluator._coco_api.cats)
File "/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py", line 55, in only_mAP_analysis
PR_plotter(Precision, Recall, categories[cls_no+1]['name'], ap)
File "/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py", line 17, in PR_plotter
plt.savefig(f"PR/{cls_name}_Precision_recall.pdf", bbox_inches="tight")
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/pyplot.py", line 722, in savefig
res = fig.savefig(*args, **kwargs)
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/figure.py", line 2180, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backend_bases.py", line 2082, in print_figure
**kwargs)
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backends/backend_pdf.py", line 2496, in print_pdf
file = PdfFile(filename, metadata=metadata)
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backends/backend_pdf.py", line 432, in __init__
fh, opened = cbook.to_filehandle(filename, "wb", return_opened=True)
File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/cbook/__init__.py", line 432, in to_filehandle
fh = open(fname, flag, encoding=encoding)
FileNotFoundError: [Errno 2] No such file or directory: 'PR/aeroplane_Precision_recall.pdf'
Are there any missing files in the repo?
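A possible way to avoid the crash would be to create the output directory before saving. The following is only a rough sketch of what PR_plotter might look like (assuming, from the traceback, that it just builds a figure and calls plt.savefig; the repository's actual code may differ):

import os
import matplotlib.pyplot as plt

def PR_plotter(precision, recall, cls_name, ap):
    # Hypothetical sketch, not the repository's actual implementation.
    os.makedirs("PR", exist_ok=True)  # make sure the PR/ output directory exists
    fig, ax = plt.subplots()
    ax.plot(recall, precision)
    ax.set_xlabel("Recall")
    ax.set_ylabel("Precision")
    ax.set_title(f"{cls_name} (AP = {ap:.4f})")
    fig.savefig(f"PR/{cls_name}_Precision_recall.pdf", bbox_inches="tight")
    # Closing the figure also avoids matplotlib's "more than 20 figures" warning.
    plt.close(fig)

Simply creating the directory up front (mkdir PR) should have the same effect.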
I touched PR/aeroplane_Precision_recall.pdf just to check what happens, and re-ran the following command:
python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only
Log:
[10/10 11:15:08 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[10/10 11:15:08 d2.data.common]: Serialized dataset takes 1.87 MiB
[10/10 11:15:08 d2.data.dataset_mapper]: Augmentations used in training: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[10/10 11:15:08 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 11:15:09 d2.evaluation.evaluator]: Inference done 11/4952. 0.0750 s / img. ETA=0:06:19
[10/10 11:15:14 d2.evaluation.evaluator]: Inference done 73/4952. 0.0785 s / img. ETA=0:06:34
[10/10 11:15:19 d2.evaluation.evaluator]: Inference done 137/4952. 0.0773 s / img. ETA=0:06:23
[10/10 11:15:24 d2.evaluation.evaluator]: Inference done 202/4952. 0.0767 s / img. ETA=0:06:15
[10/10 11:15:29 d2.evaluation.evaluator]: Inference done 266/4952. 0.0766 s / img. ETA=0:06:10
[10/10 11:15:34 d2.evaluation.evaluator]: Inference done 330/4952. 0.0765 s / img. ETA=0:06:04
[10/10 11:15:39 d2.evaluation.evaluator]: Inference done 394/4952. 0.0766 s / img. ETA=0:05:59
[10/10 11:15:44 d2.evaluation.evaluator]: Inference done 458/4952. 0.0766 s / img. ETA=0:05:54
[10/10 11:15:50 d2.evaluation.evaluator]: Inference done 522/4952. 0.0765 s / img. ETA=0:05:49
[10/10 11:15:55 d2.evaluation.evaluator]: Inference done 586/4952. 0.0765 s / img. ETA=0:05:44
[10/10 11:16:00 d2.evaluation.evaluator]: Inference done 650/4952. 0.0765 s / img. ETA=0:05:39
[10/10 11:16:05 d2.evaluation.evaluator]: Inference done 713/4952. 0.0766 s / img. ETA=0:05:34
[10/10 11:16:10 d2.evaluation.evaluator]: Inference done 777/4952. 0.0766 s / img. ETA=0:05:29
[10/10 11:16:15 d2.evaluation.evaluator]: Inference done 841/4952. 0.0765 s / img. ETA=0:05:24
[10/10 11:16:20 d2.evaluation.evaluator]: Inference done 905/4952. 0.0765 s / img. ETA=0:05:19
[10/10 11:16:25 d2.evaluation.evaluator]: Inference done 969/4952. 0.0765 s / img. ETA=0:05:14
[10/10 11:16:30 d2.evaluation.evaluator]: Inference done 1033/4952. 0.0766 s / img. ETA=0:05:09
[10/10 11:16:35 d2.evaluation.evaluator]: Inference done 1096/4952. 0.0766 s / img. ETA=0:05:04
[10/10 11:16:40 d2.evaluation.evaluator]: Inference done 1159/4952. 0.0767 s / img. ETA=0:04:59
[10/10 11:16:45 d2.evaluation.evaluator]: Inference done 1223/4952. 0.0766 s / img. ETA=0:04:54
[10/10 11:16:50 d2.evaluation.evaluator]: Inference done 1287/4952. 0.0767 s / img. ETA=0:04:49
[10/10 11:16:55 d2.evaluation.evaluator]: Inference done 1351/4952. 0.0767 s / img. ETA=0:04:44
[10/10 11:17:00 d2.evaluation.evaluator]: Inference done 1415/4952. 0.0767 s / img. ETA=0:04:39
[10/10 11:17:05 d2.evaluation.evaluator]: Inference done 1478/4952. 0.0767 s / img. ETA=0:04:34
[10/10 11:17:10 d2.evaluation.evaluator]: Inference done 1542/4952. 0.0767 s / img. ETA=0:04:29
[10/10 11:17:15 d2.evaluation.evaluator]: Inference done 1605/4952. 0.0768 s / img. ETA=0:04:24
[10/10 11:17:20 d2.evaluation.evaluator]: Inference done 1669/4952. 0.0767 s / img. ETA=0:04:19
[10/10 11:17:25 d2.evaluation.evaluator]: Inference done 1732/4952. 0.0767 s / img. ETA=0:04:14
[10/10 11:17:30 d2.evaluation.evaluator]: Inference done 1796/4952. 0.0767 s / img. ETA=0:04:09
[10/10 11:17:35 d2.evaluation.evaluator]: Inference done 1860/4952. 0.0767 s / img. ETA=0:04:04
[10/10 11:17:40 d2.evaluation.evaluator]: Inference done 1924/4952. 0.0767 s / img. ETA=0:03:59
[10/10 11:17:46 d2.evaluation.evaluator]: Inference done 1988/4952. 0.0767 s / img. ETA=0:03:54
[10/10 11:17:51 d2.evaluation.evaluator]: Inference done 2052/4952. 0.0767 s / img. ETA=0:03:49
[10/10 11:17:56 d2.evaluation.evaluator]: Inference done 2115/4952. 0.0767 s / img. ETA=0:03:44
[10/10 11:18:01 d2.evaluation.evaluator]: Inference done 2179/4952. 0.0767 s / img. ETA=0:03:39
[10/10 11:18:06 d2.evaluation.evaluator]: Inference done 2242/4952. 0.0768 s / img. ETA=0:03:34
[10/10 11:18:11 d2.evaluation.evaluator]: Inference done 2305/4952. 0.0768 s / img. ETA=0:03:29
[10/10 11:18:16 d2.evaluation.evaluator]: Inference done 2369/4952. 0.0768 s / img. ETA=0:03:24
[10/10 11:18:21 d2.evaluation.evaluator]: Inference done 2432/4952. 0.0768 s / img. ETA=0:03:19
[10/10 11:18:26 d2.evaluation.evaluator]: Inference done 2495/4952. 0.0768 s / img. ETA=0:03:14
[10/10 11:18:31 d2.evaluation.evaluator]: Inference done 2558/4952. 0.0769 s / img. ETA=0:03:09
[10/10 11:18:36 d2.evaluation.evaluator]: Inference done 2622/4952. 0.0769 s / img. ETA=0:03:04
[10/10 11:18:41 d2.evaluation.evaluator]: Inference done 2684/4952. 0.0769 s / img. ETA=0:02:59
[10/10 11:18:46 d2.evaluation.evaluator]: Inference done 2746/4952. 0.0769 s / img. ETA=0:02:54
[10/10 11:18:51 d2.evaluation.evaluator]: Inference done 2810/4952. 0.0769 s / img. ETA=0:02:49
[10/10 11:18:56 d2.evaluation.evaluator]: Inference done 2873/4952. 0.0770 s / img. ETA=0:02:44
[10/10 11:19:01 d2.evaluation.evaluator]: Inference done 2936/4952. 0.0770 s / img. ETA=0:02:39
[10/10 11:19:06 d2.evaluation.evaluator]: Inference done 2998/4952. 0.0770 s / img. ETA=0:02:35
[10/10 11:19:11 d2.evaluation.evaluator]: Inference done 3061/4952. 0.0770 s / img. ETA=0:02:30
[10/10 11:19:16 d2.evaluation.evaluator]: Inference done 3124/4952. 0.0770 s / img. ETA=0:02:25
[10/10 11:19:22 d2.evaluation.evaluator]: Inference done 3185/4952. 0.0771 s / img. ETA=0:02:20
[10/10 11:19:27 d2.evaluation.evaluator]: Inference done 3248/4952. 0.0771 s / img. ETA=0:02:15
[10/10 11:19:32 d2.evaluation.evaluator]: Inference done 3311/4952. 0.0772 s / img. ETA=0:02:10
[10/10 11:19:37 d2.evaluation.evaluator]: Inference done 3374/4952. 0.0772 s / img. ETA=0:02:05
[10/10 11:19:42 d2.evaluation.evaluator]: Inference done 3438/4952. 0.0772 s / img. ETA=0:02:00
[10/10 11:19:47 d2.evaluation.evaluator]: Inference done 3500/4952. 0.0772 s / img. ETA=0:01:55
[10/10 11:19:52 d2.evaluation.evaluator]: Inference done 3562/4952. 0.0772 s / img. ETA=0:01:50
[10/10 11:19:57 d2.evaluation.evaluator]: Inference done 3625/4952. 0.0772 s / img. ETA=0:01:45
[10/10 11:20:02 d2.evaluation.evaluator]: Inference done 3688/4952. 0.0772 s / img. ETA=0:01:40
[10/10 11:20:07 d2.evaluation.evaluator]: Inference done 3751/4952. 0.0772 s / img. ETA=0:01:35
[10/10 11:20:12 d2.evaluation.evaluator]: Inference done 3813/4952. 0.0773 s / img. ETA=0:01:30
[10/10 11:20:17 d2.evaluation.evaluator]: Inference done 3876/4952. 0.0773 s / img. ETA=0:01:25
[10/10 11:20:22 d2.evaluation.evaluator]: Inference done 3938/4952. 0.0773 s / img. ETA=0:01:20
[10/10 11:20:27 d2.evaluation.evaluator]: Inference done 4001/4952. 0.0773 s / img. ETA=0:01:15
[10/10 11:20:32 d2.evaluation.evaluator]: Inference done 4064/4952. 0.0773 s / img. ETA=0:01:10
[10/10 11:20:37 d2.evaluation.evaluator]: Inference done 4128/4952. 0.0773 s / img. ETA=0:01:05
[10/10 11:20:42 d2.evaluation.evaluator]: Inference done 4190/4952. 0.0773 s / img. ETA=0:01:00
[10/10 11:20:47 d2.evaluation.evaluator]: Inference done 4253/4952. 0.0773 s / img. ETA=0:00:55
[10/10 11:20:52 d2.evaluation.evaluator]: Inference done 4315/4952. 0.0773 s / img. ETA=0:00:50
[10/10 11:20:57 d2.evaluation.evaluator]: Inference done 4377/4952. 0.0773 s / img. ETA=0:00:45
[10/10 11:21:02 d2.evaluation.evaluator]: Inference done 4440/4952. 0.0774 s / img. ETA=0:00:40
[10/10 11:21:07 d2.evaluation.evaluator]: Inference done 4502/4952. 0.0774 s / img. ETA=0:00:35
[10/10 11:21:12 d2.evaluation.evaluator]: Inference done 4564/4952. 0.0774 s / img. ETA=0:00:30
[10/10 11:21:17 d2.evaluation.evaluator]: Inference done 4627/4952. 0.0774 s / img. ETA=0:00:25
[10/10 11:21:22 d2.evaluation.evaluator]: Inference done 4689/4952. 0.0774 s / img. ETA=0:00:20
[10/10 11:21:27 d2.evaluation.evaluator]: Inference done 4753/4952. 0.0774 s / img. ETA=0:00:15
[10/10 11:21:33 d2.evaluation.evaluator]: Inference done 4816/4952. 0.0774 s / img. ETA=0:00:10
[10/10 11:21:38 d2.evaluation.evaluator]: Inference done 4879/4952. 0.0774 s / img. ETA=0:00:05
[10/10 11:21:43 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0774 s / img. ETA=0:00:00
[10/10 11:21:43 d2.evaluation.evaluator]: Total inference time: 0:06:34.775224 (0.079801 s / img per device, on 1 devices)
[10/10 11:21:43 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:22 (0.077401 s / img per device, on 1 devices)
[10/10 11:21:44 detectron2]: Image level evaluation complete for custom_voc_2007_test
[10/10 11:21:44 detectron2]: Results for custom_voc_2007_test
[10/10 11:21:44 detectron2]: AP for 0: 2.896947080444079e-06
[10/10 11:21:44 detectron2]: AP for 1: 0.0
[10/10 11:21:44 detectron2]: AP for 2: 7.036307215457782e-05
[10/10 11:21:44 detectron2]: AP for 3: 0.0
[10/10 11:21:45 detectron2]: AP for 4: 0.0
[10/10 11:21:45 detectron2]: AP for 5: 0.0
[10/10 11:21:45 detectron2]: AP for 6: 0.0
[10/10 11:21:45 detectron2]: AP for 7: 0.0
[10/10 11:21:45 detectron2]: AP for 8: 0.0
[10/10 11:21:46 detectron2]: AP for 9: 0.00013395248970482498
[10/10 11:21:46 detectron2]: AP for 10: 0.0
[10/10 11:21:46 detectron2]: AP for 11: 0.0
[10/10 11:21:46 detectron2]: AP for 12: 0.0
[10/10 11:21:47 detectron2]: AP for 13: 6.641517302341526e-06
[10/10 11:21:47 detectron2]: AP for 15: 0.0
[10/10 11:21:47 detectron2]: AP for 16: 0.0
[10/10 11:21:47 detectron2]: AP for 17: 0.0
[10/10 11:21:47 detectron2]: AP for 18: 0.0
[10/10 11:21:47 detectron2]: AP for 19: 0.0
[10/10 11:21:47 detectron2]: mAP: 1.125547532865312e-05
WARNING [10/10 11:21:48 d2.data.datasets.coco]:
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[10/10 11:21:48 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[10/10 11:21:48 d2.data.build]: Distribution of instances among all 21 categories:
| category | #instances | category | #instances | category | #instances |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown | 15235 | aeroplane | 0 | bicycle | 0 |
| bird | 0 | boat | 0 | bottle | 0 |
| bus | 0 | car | 0 | cat | 0 |
| chair | 0 | cow | 0 | diningtable | 0 |
| dog | 0 | horse | 0 | motorbike | 0 |
| person | 0 | pottedplant | 0 | sheep | 0 |
| sofa | 0 | train | 0 | tvmonitor | 0 |
| | | | | | |
| total | 15235 | | | | |
[10/10 11:21:48 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[10/10 11:21:48 d2.data.common]: Serialized dataset takes 8.39 MiB
[10/10 11:21:48 d2.data.dataset_mapper]: Augmentations used in training: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[10/10 11:21:48 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 11:21:50 d2.evaluation.evaluator]: Inference done 11/4952. 0.0778 s / img. ETA=0:06:34
[10/10 11:21:55 d2.evaluation.evaluator]: Inference done 74/4952. 0.0781 s / img. ETA=0:06:32
[10/10 11:22:00 d2.evaluation.evaluator]: Inference done 137/4952. 0.0781 s / img. ETA=0:06:27
[10/10 11:22:05 d2.evaluation.evaluator]: Inference done 199/4952. 0.0783 s / img. ETA=0:06:23
[10/10 11:22:10 d2.evaluation.evaluator]: Inference done 261/4952. 0.0787 s / img. ETA=0:06:19
[10/10 11:22:15 d2.evaluation.evaluator]: Inference done 322/4952. 0.0789 s / img. ETA=0:06:16
[10/10 11:22:20 d2.evaluation.evaluator]: Inference done 384/4952. 0.0790 s / img. ETA=0:06:11
[10/10 11:22:25 d2.evaluation.evaluator]: Inference done 445/4952. 0.0791 s / img. ETA=0:06:07
[10/10 11:22:30 d2.evaluation.evaluator]: Inference done 506/4952. 0.0792 s / img. ETA=0:06:02
[10/10 11:22:35 d2.evaluation.evaluator]: Inference done 567/4952. 0.0793 s / img. ETA=0:05:58
[10/10 11:22:40 d2.evaluation.evaluator]: Inference done 628/4952. 0.0795 s / img. ETA=0:05:53
[10/10 11:22:45 d2.evaluation.evaluator]: Inference done 690/4952. 0.0794 s / img. ETA=0:05:48
[10/10 11:22:50 d2.evaluation.evaluator]: Inference done 752/4952. 0.0794 s / img. ETA=0:05:43
[10/10 11:22:55 d2.evaluation.evaluator]: Inference done 814/4952. 0.0794 s / img. ETA=0:05:38
[10/10 11:23:00 d2.evaluation.evaluator]: Inference done 875/4952. 0.0794 s / img. ETA=0:05:33
[10/10 11:23:05 d2.evaluation.evaluator]: Inference done 937/4952. 0.0794 s / img. ETA=0:05:28
[10/10 11:23:10 d2.evaluation.evaluator]: Inference done 999/4952. 0.0794 s / img. ETA=0:05:23
[10/10 11:23:15 d2.evaluation.evaluator]: Inference done 1061/4952. 0.0794 s / img. ETA=0:05:17
[10/10 11:23:20 d2.evaluation.evaluator]: Inference done 1123/4952. 0.0794 s / img. ETA=0:05:12
[10/10 11:23:26 d2.evaluation.evaluator]: Inference done 1185/4952. 0.0794 s / img. ETA=0:05:07
[10/10 11:23:31 d2.evaluation.evaluator]: Inference done 1246/4952. 0.0794 s / img. ETA=0:05:02
[10/10 11:23:36 d2.evaluation.evaluator]: Inference done 1307/4952. 0.0795 s / img. ETA=0:04:58
[10/10 11:23:41 d2.evaluation.evaluator]: Inference done 1369/4952. 0.0795 s / img. ETA=0:04:53
[10/10 11:23:46 d2.evaluation.evaluator]: Inference done 1431/4952. 0.0794 s / img. ETA=0:04:47
[10/10 11:23:51 d2.evaluation.evaluator]: Inference done 1493/4952. 0.0794 s / img. ETA=0:04:42
[10/10 11:23:56 d2.evaluation.evaluator]: Inference done 1555/4952. 0.0794 s / img. ETA=0:04:37
[10/10 11:24:01 d2.evaluation.evaluator]: Inference done 1616/4952. 0.0794 s / img. ETA=0:04:32
[10/10 11:24:06 d2.evaluation.evaluator]: Inference done 1679/4952. 0.0793 s / img. ETA=0:04:27
[10/10 11:24:11 d2.evaluation.evaluator]: Inference done 1741/4952. 0.0793 s / img. ETA=0:04:22
[10/10 11:24:16 d2.evaluation.evaluator]: Inference done 1803/4952. 0.0793 s / img. ETA=0:04:17
[10/10 11:24:21 d2.evaluation.evaluator]: Inference done 1865/4952. 0.0793 s / img. ETA=0:04:12
[10/10 11:24:26 d2.evaluation.evaluator]: Inference done 1928/4952. 0.0793 s / img. ETA=0:04:06
[10/10 11:24:31 d2.evaluation.evaluator]: Inference done 1989/4952. 0.0793 s / img. ETA=0:04:01
[10/10 11:24:36 d2.evaluation.evaluator]: Inference done 2051/4952. 0.0793 s / img. ETA=0:03:56
[10/10 11:24:41 d2.evaluation.evaluator]: Inference done 2111/4952. 0.0794 s / img. ETA=0:03:52
[10/10 11:24:46 d2.evaluation.evaluator]: Inference done 2173/4952. 0.0794 s / img. ETA=0:03:47
[10/10 11:24:51 d2.evaluation.evaluator]: Inference done 2233/4952. 0.0794 s / img. ETA=0:03:42
[10/10 11:24:56 d2.evaluation.evaluator]: Inference done 2295/4952. 0.0794 s / img. ETA=0:03:37
[10/10 11:25:01 d2.evaluation.evaluator]: Inference done 2356/4952. 0.0795 s / img. ETA=0:03:32
[10/10 11:25:06 d2.evaluation.evaluator]: Inference done 2418/4952. 0.0795 s / img. ETA=0:03:27
[10/10 11:25:12 d2.evaluation.evaluator]: Inference done 2479/4952. 0.0795 s / img. ETA=0:03:22
[10/10 11:25:17 d2.evaluation.evaluator]: Inference done 2541/4952. 0.0795 s / img. ETA=0:03:17
[10/10 11:25:22 d2.evaluation.evaluator]: Inference done 2601/4952. 0.0795 s / img. ETA=0:03:12
[10/10 11:25:27 d2.evaluation.evaluator]: Inference done 2663/4952. 0.0795 s / img. ETA=0:03:07
[10/10 11:25:32 d2.evaluation.evaluator]: Inference done 2725/4952. 0.0795 s / img. ETA=0:03:02
[10/10 11:25:37 d2.evaluation.evaluator]: Inference done 2787/4952. 0.0795 s / img. ETA=0:02:57
[10/10 11:25:42 d2.evaluation.evaluator]: Inference done 2848/4952. 0.0795 s / img. ETA=0:02:52
[10/10 11:25:47 d2.evaluation.evaluator]: Inference done 2909/4952. 0.0795 s / img. ETA=0:02:47
[10/10 11:25:52 d2.evaluation.evaluator]: Inference done 2972/4952. 0.0794 s / img. ETA=0:02:41
[10/10 11:25:57 d2.evaluation.evaluator]: Inference done 3033/4952. 0.0795 s / img. ETA=0:02:36
[10/10 11:26:02 d2.evaluation.evaluator]: Inference done 3094/4952. 0.0795 s / img. ETA=0:02:31
[10/10 11:26:07 d2.evaluation.evaluator]: Inference done 3156/4952. 0.0795 s / img. ETA=0:02:26
[10/10 11:26:12 d2.evaluation.evaluator]: Inference done 3217/4952. 0.0795 s / img. ETA=0:02:21
[10/10 11:26:17 d2.evaluation.evaluator]: Inference done 3278/4952. 0.0795 s / img. ETA=0:02:16
[10/10 11:26:22 d2.evaluation.evaluator]: Inference done 3340/4952. 0.0795 s / img. ETA=0:02:11
[10/10 11:26:27 d2.evaluation.evaluator]: Inference done 3402/4952. 0.0795 s / img. ETA=0:02:06
[10/10 11:26:32 d2.evaluation.evaluator]: Inference done 3465/4952. 0.0795 s / img. ETA=0:02:01
[10/10 11:26:37 d2.evaluation.evaluator]: Inference done 3527/4952. 0.0795 s / img. ETA=0:01:56
[10/10 11:26:42 d2.evaluation.evaluator]: Inference done 3588/4952. 0.0795 s / img. ETA=0:01:51
[10/10 11:26:47 d2.evaluation.evaluator]: Inference done 3649/4952. 0.0795 s / img. ETA=0:01:46
[10/10 11:26:52 d2.evaluation.evaluator]: Inference done 3711/4952. 0.0795 s / img. ETA=0:01:41
[10/10 11:26:57 d2.evaluation.evaluator]: Inference done 3772/4952. 0.0795 s / img. ETA=0:01:36
[10/10 11:27:02 d2.evaluation.evaluator]: Inference done 3833/4952. 0.0795 s / img. ETA=0:01:31
[10/10 11:27:07 d2.evaluation.evaluator]: Inference done 3894/4952. 0.0795 s / img. ETA=0:01:26
[10/10 11:27:12 d2.evaluation.evaluator]: Inference done 3956/4952. 0.0795 s / img. ETA=0:01:21
[10/10 11:27:17 d2.evaluation.evaluator]: Inference done 4017/4952. 0.0795 s / img. ETA=0:01:16
[10/10 11:27:22 d2.evaluation.evaluator]: Inference done 4079/4952. 0.0795 s / img. ETA=0:01:11
[10/10 11:27:27 d2.evaluation.evaluator]: Inference done 4141/4952. 0.0795 s / img. ETA=0:01:06
[10/10 11:27:33 d2.evaluation.evaluator]: Inference done 4202/4952. 0.0795 s / img. ETA=0:01:01
[10/10 11:27:38 d2.evaluation.evaluator]: Inference done 4263/4952. 0.0795 s / img. ETA=0:00:56
[10/10 11:27:43 d2.evaluation.evaluator]: Inference done 4325/4952. 0.0795 s / img. ETA=0:00:51
[10/10 11:27:48 d2.evaluation.evaluator]: Inference done 4387/4952. 0.0795 s / img. ETA=0:00:46
[10/10 11:27:53 d2.evaluation.evaluator]: Inference done 4449/4952. 0.0795 s / img. ETA=0:00:41
[10/10 11:27:58 d2.evaluation.evaluator]: Inference done 4511/4952. 0.0795 s / img. ETA=0:00:36
[10/10 11:28:03 d2.evaluation.evaluator]: Inference done 4572/4952. 0.0795 s / img. ETA=0:00:31
[10/10 11:28:08 d2.evaluation.evaluator]: Inference done 4634/4952. 0.0795 s / img. ETA=0:00:26
[10/10 11:28:13 d2.evaluation.evaluator]: Inference done 4696/4952. 0.0795 s / img. ETA=0:00:20
[10/10 11:28:18 d2.evaluation.evaluator]: Inference done 4758/4952. 0.0795 s / img. ETA=0:00:15
[10/10 11:28:23 d2.evaluation.evaluator]: Inference done 4820/4952. 0.0795 s / img. ETA=0:00:10
[10/10 11:28:28 d2.evaluation.evaluator]: Inference done 4881/4952. 0.0795 s / img. ETA=0:00:05
[10/10 11:28:33 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0795 s / img. ETA=0:00:00
[10/10 11:28:34 d2.evaluation.evaluator]: Total inference time: 0:06:44.715277 (0.081810 s / img per device, on 1 devices)
[10/10 11:28:34 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:33 (0.079462 s / img per device, on 1 devices)
[10/10 11:28:34 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[10/10 11:28:34 detectron2]: Results for WR1_Mixed_Unknowns
[10/10 11:28:34 detectron2]: AP for 0: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:34 detectron2]: AP for 1: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:35 detectron2]: AP for 2: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:35 detectron2]: AP for 3: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:35 detectron2]: AP for 4: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:35 detectron2]: AP for 5: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:35 detectron2]: AP for 6: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:36 detectron2]: AP for 7: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:36 detectron2]: AP for 8: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:36 detectron2]: AP for 9: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:36 detectron2]: AP for 10: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:36 detectron2]: AP for 11: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:37 detectron2]: AP for 12: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:37 detectron2]: AP for 13: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:37 detectron2]: AP for 15: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:37 detectron2]: AP for 16: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:37 detectron2]: AP for 17: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:38 detectron2]: AP for 18: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.subplots()
[10/10 11:28:38 detectron2]: AP for 19: 0.0
[10/10 11:28:38 detectron2]: mAP: 0.0
[10/10 11:28:38 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[10/10 11:28:38 detectron2]: AP for 0: 1.778590444700967e-06
[10/10 11:28:38 detectron2]: AP for 1: 0.0
[10/10 11:28:38 detectron2]: AP for 2: 4.170141983195208e-05
[10/10 11:28:38 detectron2]: AP for 3: 0.0
[10/10 11:28:38 detectron2]: AP for 4: 0.0
[10/10 11:28:38 detectron2]: AP for 5: 0.0
[10/10 11:28:38 detectron2]: AP for 6: 0.0
[10/10 11:28:38 detectron2]: AP for 7: 0.0
[10/10 11:28:38 detectron2]: AP for 8: 0.0
[10/10 11:28:38 detectron2]: AP for 9: 5.68300238228403e-05
[10/10 11:28:38 detectron2]: AP for 10: 0.0
[10/10 11:28:38 detectron2]: AP for 11: 0.0
[10/10 11:28:38 detectron2]: AP for 12: 0.0
[10/10 11:28:38 detectron2]: AP for 13: 3.277775022070273e-06
[10/10 11:28:38 detectron2]: AP for 15: 0.0
[10/10 11:28:38 detectron2]: AP for 16: 0.0
[10/10 11:28:38 detectron2]: AP for 17: 0.0
[10/10 11:28:38 detectron2]: AP for 18: 0.0
[10/10 11:28:38 detectron2]: AP for 19: 0.0
[10/10 11:28:38 detectron2]: mAP: 5.451990091387415e-06
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:63: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
eval_info['predictions'][k] = np.array([torch.tensor(_).type(torch.FloatTensor).numpy() for _ in eval_info['predictions'][k]])
[10/10 11:28:40 detectron2]: AP for 0.0: 2.896947080444079e-06
[10/10 11:28:40 detectron2]: AP for 1.0: 0.0
[10/10 11:28:40 detectron2]: AP for 2.0: 7.036307215457782e-05
[10/10 11:28:40 detectron2]: AP for 3.0: 0.0
[10/10 11:28:40 detectron2]: AP for 4.0: 0.0
[10/10 11:28:40 detectron2]: AP for 5.0: 0.0
[10/10 11:28:40 detectron2]: AP for 6.0: 0.0
[10/10 11:28:40 detectron2]: AP for 7.0: 0.0
[10/10 11:28:40 detectron2]: AP for 8.0: 0.0
[10/10 11:28:40 detectron2]: AP for 9.0: 0.00013395248970482498
[10/10 11:28:40 detectron2]: AP for 10.0: 0.0
[10/10 11:28:40 detectron2]: AP for 11.0: 0.0
[10/10 11:28:40 detectron2]: AP for 12.0: 0.0
[10/10 11:28:41 detectron2]: AP for 13.0: 6.641517302341526e-06
[10/10 11:28:41 detectron2]: AP for 15.0: 0.0
[10/10 11:28:41 detectron2]: AP for 16.0: 0.0
[10/10 11:28:41 detectron2]: AP for 17.0: 0.0
[10/10 11:28:41 detectron2]: AP for 18.0: 0.0
[10/10 11:28:41 detectron2]: AP for 19.0: 0.0
[10/10 11:28:41 detectron2]: mAP: 1.125547532865312e-05
[10/10 11:28:41 detectron2]: AP for 0.0: 2.735836005740566e-06
[10/10 11:28:41 detectron2]: AP for 1.0: 0.0
[10/10 11:28:41 detectron2]: AP for 2.0: 6.679580110358074e-05
[10/10 11:28:41 detectron2]: AP for 3.0: 0.0
[10/10 11:28:41 detectron2]: AP for 4.0: 0.0
[10/10 11:28:41 detectron2]: AP for 5.0: 0.0
[10/10 11:28:41 detectron2]: AP for 6.0: 0.0
[10/10 11:28:41 detectron2]: AP for 7.0: 0.0
[10/10 11:28:41 detectron2]: AP for 8.0: 0.0
[10/10 11:28:41 detectron2]: AP for 9.0: 0.00012175324809504673
[10/10 11:28:41 detectron2]: AP for 10.0: 0.0
[10/10 11:28:41 detectron2]: AP for 11.0: 0.0
[10/10 11:28:41 detectron2]: AP for 12.0: 0.0
[10/10 11:28:41 detectron2]: AP for 13.0: 6.086168014007853e-06
[10/10 11:28:41 detectron2]: AP for 15.0: 0.0
[10/10 11:28:41 detectron2]: AP for 16.0: 0.0
[10/10 11:28:41 detectron2]: AP for 17.0: 0.0
[10/10 11:28:41 detectron2]: AP for 18.0: 0.0
[10/10 11:28:41 detectron2]: AP for 19.0: 0.0
[10/10 11:28:41 detectron2]: mAP: 1.0387950169388205e-05
[10/10 11:28:42 detectron2]: AP for 0.0: 2.585729816928506e-06
[10/10 11:28:42 detectron2]: AP for 1.0: 0.0
[10/10 11:28:42 detectron2]: AP for 2.0: 6.30000649834983e-05
[10/10 11:28:42 detectron2]: AP for 3.0: 0.0
[10/10 11:28:42 detectron2]: AP for 4.0: 0.0
[10/10 11:28:42 detectron2]: AP for 5.0: 0.0
[10/10 11:28:42 detectron2]: AP for 6.0: 0.0
[10/10 11:28:42 detectron2]: AP for 7.0: 0.0
[10/10 11:28:42 detectron2]: AP for 8.0: 0.0
[10/10 11:28:42 detectron2]: AP for 9.0: 0.00010935335740214214
[10/10 11:28:42 detectron2]: AP for 10.0: 0.0
[10/10 11:28:42 detectron2]: AP for 11.0: 0.0
[10/10 11:28:42 detectron2]: AP for 12.0: 0.0
[10/10 11:28:42 detectron2]: AP for 13.0: 5.587185569311259e-06
[10/10 11:28:42 detectron2]: AP for 15.0: 0.0
[10/10 11:28:42 detectron2]: AP for 16.0: 0.0
[10/10 11:28:42 detectron2]: AP for 17.0: 0.0
[10/10 11:28:42 detectron2]: AP for 18.0: 0.0
[10/10 11:28:42 detectron2]: AP for 19.0: 0.0
[10/10 11:28:42 detectron2]: mAP: 9.501385648036376e-06
[10/10 11:28:42 detectron2]: AP for 0.0: 2.4474111341987737e-06
[10/10 11:28:42 detectron2]: AP for 1.0: 0.0
[10/10 11:28:42 detectron2]: AP for 2.0: 5.934013825026341e-05
[10/10 11:28:42 detectron2]: AP for 3.0: 0.0
[10/10 11:28:42 detectron2]: AP for 4.0: 0.0
[10/10 11:28:42 detectron2]: AP for 5.0: 0.0
[10/10 11:28:42 detectron2]: AP for 6.0: 0.0
[10/10 11:28:43 detectron2]: AP for 7.0: 0.0
[10/10 11:28:43 detectron2]: AP for 8.0: 0.0
[10/10 11:28:43 detectron2]: AP for 9.0: 9.990010585170239e-05
[10/10 11:28:43 detectron2]: AP for 10.0: 0.0
[10/10 11:28:43 detectron2]: AP for 11.0: 0.0
[10/10 11:28:43 detectron2]: AP for 12.0: 0.0
[10/10 11:28:43 detectron2]: AP for 13.0: 5.13871964358259e-06
[10/10 11:28:43 detectron2]: AP for 15.0: 0.0
[10/10 11:28:43 detectron2]: AP for 16.0: 0.0
[10/10 11:28:43 detectron2]: AP for 17.0: 0.0
[10/10 11:28:43 detectron2]: AP for 18.0: 0.0
[10/10 11:28:43 detectron2]: AP for 19.0: 0.0
[10/10 11:28:43 detectron2]: mAP: 8.780335519986693e-06
[10/10 11:28:43 detectron2]: AP for 0.0: 2.3299867280002218e-06
[10/10 11:28:43 detectron2]: AP for 1.0: 0.0
[10/10 11:28:43 detectron2]: AP for 2.0: 5.4797521443106234e-05
[10/10 11:28:43 detectron2]: AP for 3.0: 0.0
[10/10 11:28:43 detectron2]: AP for 4.0: 0.0
[10/10 11:28:43 detectron2]: AP for 5.0: 0.0
[10/10 11:28:43 detectron2]: AP for 6.0: 0.0
[10/10 11:28:43 detectron2]: AP for 7.0: 0.0
[10/10 11:28:44 detectron2]: AP for 8.0: 0.0
[10/10 11:28:44 detectron2]: AP for 9.0: 8.895213977666572e-05
[10/10 11:28:44 detectron2]: AP for 10.0: 0.0
[10/10 11:28:44 detectron2]: AP for 11.0: 0.0
[10/10 11:28:44 detectron2]: AP for 12.0: 0.0
[10/10 11:28:44 detectron2]: AP for 13.0: 4.765626727021299e-06
[10/10 11:28:44 detectron2]: AP for 15.0: 0.0
[10/10 11:28:44 detectron2]: AP for 16.0: 0.0
[10/10 11:28:44 detectron2]: AP for 17.0: 0.0
[10/10 11:28:44 detectron2]: AP for 18.0: 0.0
[10/10 11:28:44 detectron2]: AP for 19.0: 0.0
[10/10 11:28:44 detectron2]: mAP: 7.93922481534537e-06
[10/10 11:28:44 detectron2]: AP for 0.0: 2.2166461803863058e-06
[10/10 11:28:44 detectron2]: AP for 1.0: 0.0
[10/10 11:28:44 detectron2]: AP for 2.0: 5.150656579644419e-05
[10/10 11:28:44 detectron2]: AP for 3.0: 0.0
[10/10 11:28:44 detectron2]: AP for 4.0: 0.0
[10/10 11:28:44 detectron2]: AP for 5.0: 0.0
[10/10 11:28:44 detectron2]: AP for 6.0: 0.0
[10/10 11:28:44 detectron2]: AP for 7.0: 0.0
[10/10 11:28:44 detectron2]: AP for 8.0: 0.0
[10/10 11:28:44 detectron2]: AP for 9.0: 8.148409688146785e-05
[10/10 11:28:45 detectron2]: AP for 10.0: 0.0
[10/10 11:28:45 detectron2]: AP for 11.0: 0.0
[10/10 11:28:45 detectron2]: AP for 12.0: 0.0
[10/10 11:28:45 detectron2]: AP for 13.0: 4.396841177367605e-06
[10/10 11:28:45 detectron2]: AP for 15.0: 0.0
[10/10 11:28:45 detectron2]: AP for 16.0: 0.0
[10/10 11:28:45 detectron2]: AP for 17.0: 0.0
[10/10 11:28:45 detectron2]: AP for 18.0: 0.0
[10/10 11:28:45 detectron2]: AP for 19.0: 0.0
[10/10 11:28:45 detectron2]: mAP: 7.347587597905658e-06
[10/10 11:28:45 detectron2]: AP for 0.0: 2.1033060875197407e-06
[10/10 11:28:45 detectron2]: AP for 1.0: 0.0
[10/10 11:28:45 detectron2]: AP for 2.0: 4.9380279961042106e-05
[10/10 11:28:45 detectron2]: AP for 3.0: 0.0
[10/10 11:28:45 detectron2]: AP for 4.0: 0.0
[10/10 11:28:45 detectron2]: AP for 5.0: 0.0
[10/10 11:28:45 detectron2]: AP for 6.0: 0.0
[10/10 11:28:45 detectron2]: AP for 7.0: 0.0
[10/10 11:28:45 detectron2]: AP for 8.0: 0.0
[10/10 11:28:45 detectron2]: AP for 9.0: 7.429236575262621e-05
[10/10 11:28:46 detectron2]: AP for 10.0: 0.0
[10/10 11:28:46 detectron2]: AP for 11.0: 0.0
[10/10 11:28:46 detectron2]: AP for 12.0: 0.0
[10/10 11:28:46 detectron2]: AP for 13.0: 4.0986965359479655e-06
[10/10 11:28:46 detectron2]: AP for 15.0: 0.0
[10/10 11:28:46 detectron2]: AP for 16.0: 0.0
[10/10 11:28:46 detectron2]: AP for 17.0: 0.0
[10/10 11:28:46 detectron2]: AP for 18.0: 0.0
[10/10 11:28:46 detectron2]: AP for 19.0: 0.0
[10/10 11:28:46 detectron2]: mAP: 6.835507974756183e-06
[10/10 11:28:46 detectron2]: AP for 0.0: 2.0148736439296044e-06
[10/10 11:28:46 detectron2]: AP for 1.0: 0.0
[10/10 11:28:47 detectron2]: AP for 2.0: 4.75465931231156e-05
[10/10 11:28:47 detectron2]: AP for 3.0: 0.0
[10/10 11:28:47 detectron2]: AP for 4.0: 0.0
[10/10 11:28:47 detectron2]: AP for 5.0: 0.0
[10/10 11:28:47 detectron2]: AP for 6.0: 0.0
[10/10 11:28:47 detectron2]: AP for 7.0: 0.0
[10/10 11:28:47 detectron2]: AP for 8.0: 0.0
[10/10 11:28:47 detectron2]: AP for 9.0: 6.867975025670603e-05
[10/10 11:28:47 detectron2]: AP for 10.0: 0.0
[10/10 11:28:47 detectron2]: AP for 11.0: 0.0
[10/10 11:28:47 detectron2]: AP for 12.0: 0.0
[10/10 11:28:47 detectron2]: AP for 13.0: 3.874572485074168e-06
[10/10 11:28:47 detectron2]: AP for 15.0: 0.0
[10/10 11:28:47 detectron2]: AP for 16.0: 0.0
[10/10 11:28:47 detectron2]: AP for 17.0: 0.0
[10/10 11:28:47 detectron2]: AP for 18.0: 0.0
[10/10 11:28:47 detectron2]: AP for 19.0: 0.0
[10/10 11:28:47 detectron2]: mAP: 6.427146672649542e-06
[10/10 11:28:48 detectron2]: AP for 0.0: 1.926529876072891e-06
[10/10 11:28:48 detectron2]: AP for 1.0: 0.0
[10/10 11:28:48 detectron2]: AP for 2.0: 4.56370908068493e-05
[10/10 11:28:48 detectron2]: AP for 3.0: 0.0
[10/10 11:28:48 detectron2]: AP for 4.0: 0.0
[10/10 11:28:48 detectron2]: AP for 5.0: 0.0
[10/10 11:28:48 detectron2]: AP for 6.0: 0.0
[10/10 11:28:48 detectron2]: AP for 7.0: 0.0
[10/10 11:28:48 detectron2]: AP for 8.0: 0.0
[10/10 11:28:48 detectron2]: AP for 9.0: 6.498148286482319e-05
[10/10 11:28:48 detectron2]: AP for 10.0: 0.0
[10/10 11:28:48 detectron2]: AP for 11.0: 0.0
[10/10 11:28:48 detectron2]: AP for 12.0: 0.0
[10/10 11:28:48 detectron2]: AP for 13.0: 3.6445273963181535e-06
[10/10 11:28:48 detectron2]: AP for 15.0: 0.0
[10/10 11:28:48 detectron2]: AP for 16.0: 0.0
[10/10 11:28:48 detectron2]: AP for 17.0: 0.0
[10/10 11:28:48 detectron2]: AP for 18.0: 0.0
[10/10 11:28:48 detectron2]: AP for 19.0: 0.0
[10/10 11:28:48 detectron2]: mAP: 6.115243650128832e-06
[10/10 11:28:49 detectron2]: AP for 0.0: 1.848534793680301e-06
[10/10 11:28:49 detectron2]: AP for 1.0: 0.0
[10/10 11:28:49 detectron2]: AP for 2.0: 4.347637150203809e-05
[10/10 11:28:49 detectron2]: AP for 3.0: 0.0
[10/10 11:28:49 detectron2]: AP for 4.0: 0.0
[10/10 11:28:49 detectron2]: AP for 5.0: 0.0
[10/10 11:28:49 detectron2]: AP for 6.0: 0.0
[10/10 11:28:49 detectron2]: AP for 7.0: 0.0
[10/10 11:28:49 detectron2]: AP for 8.0: 0.0
[10/10 11:28:49 detectron2]: AP for 9.0: 6.008532363921404e-05
[10/10 11:28:49 detectron2]: AP for 10.0: 0.0
[10/10 11:28:49 detectron2]: AP for 11.0: 0.0
[10/10 11:28:49 detectron2]: AP for 12.0: 0.0
[10/10 11:28:50 detectron2]: AP for 13.0: 3.4480974591133418e-06
[10/10 11:28:50 detectron2]: AP for 15.0: 0.0
[10/10 11:28:50 detectron2]: AP for 16.0: 0.0
[10/10 11:28:50 detectron2]: AP for 17.0: 0.0
[10/10 11:28:50 detectron2]: AP for 18.0: 0.0
[10/10 11:28:50 detectron2]: AP for 19.0: 0.0
[10/10 11:28:50 detectron2]: mAP: 5.729385975428158e-06
[10/10 11:28:50 detectron2]: AP for 0.0: 1.778590444700967e-06
[10/10 11:28:50 detectron2]: AP for 1.0: 0.0
[10/10 11:28:50 detectron2]: AP for 2.0: 4.170141983195208e-05
[10/10 11:28:50 detectron2]: AP for 3.0: 0.0
[10/10 11:28:50 detectron2]: AP for 4.0: 0.0
[10/10 11:28:50 detectron2]: AP for 5.0: 0.0
[10/10 11:28:50 detectron2]: AP for 6.0: 0.0
[10/10 11:28:50 detectron2]: AP for 7.0: 0.0
[10/10 11:28:50 detectron2]: AP for 8.0: 0.0
[10/10 11:28:50 detectron2]: AP for 9.0: 5.68300238228403e-05
[10/10 11:28:51 detectron2]: AP for 10.0: 0.0
[10/10 11:28:51 detectron2]: AP for 11.0: 0.0
[10/10 11:28:51 detectron2]: AP for 12.0: 0.0
[10/10 11:28:51 detectron2]: AP for 13.0: 3.277775022070273e-06
[10/10 11:28:51 detectron2]: AP for 15.0: 0.0
[10/10 11:28:51 detectron2]: AP for 16.0: 0.0
[10/10 11:28:51 detectron2]: AP for 17.0: 0.0
[10/10 11:28:51 detectron2]: AP for 18.0: 0.0
[10/10 11:28:51 detectron2]: AP for 19.0: 0.0
[10/10 11:28:51 detectron2]: mAP: 5.451990091387415e-06
Could you explain a little about how to interpret these numbers? Most of them are nearly 0. New files are created under PR/, and all of them show 0.0% across the precision-vs-recall graph.
Hi, it looks like you aren't using a trained model; please refer to the updated README. Also, the logging messages have been updated to better explain the values. Thanks!
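If main.py uses detectron2's default argument parser, the trained checkpoint can also be pointed at explicitly with a MODEL.WEIGHTS override instead of relying on --resume, for example (using the same ./output/model_final.pth path that appears in the log below):

python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --eval-only MODEL.WEIGHTS ./output/model_final.pth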
Hello Akshay, I trained the model as mentioned in the updated README. Evaluation first runs for custom_voc_2007_test, then for WR1_Mixed_Unknowns. In the logs, I notice that there are multiple sequences of evaluation after evaluating WR1_Mixed_Unknowns. Could you explain how to understand the values from the second evaluation?
Log:
<starting log cropped>
[32m[11/05 16:03:58 fvcore.common.checkpoint]: [0mLoading checkpoint from ./output/model_final.pth
[5m[31mWARNING[0m [32m[11/05 16:03:58 d2.data.datasets.coco]: [0m
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[32m[11/05 16:03:58 d2.data.datasets.coco]: [0mLoaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[32m[11/05 16:03:59 d2.data.build]: [0mDistribution of instances among all 21 categories:
[36m| category | #instances | category | #instances | category | #instances |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown | 0 | aeroplane | 311 | bicycle | 389 |
| bird | 576 | boat | 393 | bottle | 657 |
| bus | 254 | car | 1541 | cat | 370 |
| chair | 1374 | cow | 329 | diningtable | 299 |
| dog | 530 | horse | 395 | motorbike | 369 |
| person | 5227 | pottedplant | 592 | sheep | 311 |
| sofa | 396 | train | 302 | tvmonitor | 361 |
| | | | | | |
| total | 14976 | | | | |[0m
[32m[11/05 16:03:59 d2.data.common]: [0mSerializing 4952 elements to byte tensors and concatenating them all ...
[32m[11/05 16:03:59 d2.data.common]: [0mSerialized dataset takes 1.87 MiB
[32m[11/05 16:03:59 d2.data.dataset_mapper]: [0m[DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[32m[11/05 16:03:59 d2.evaluation.evaluator]: [0mStart inference on 4952 images
[32m[11/05 16:04:00 d2.evaluation.evaluator]: [0mInference done 11/4952. 0.0777 s / img. ETA=0:06:33
[32m[11/05 16:04:05 d2.evaluation.evaluator]: [0mInference done 75/4952. 0.0772 s / img. ETA=0:06:26
[32m[11/05 16:04:10 d2.evaluation.evaluator]: [0mInference done 138/4952. 0.0775 s / img. ETA=0:06:23
[32m[11/05 16:04:15 d2.evaluation.evaluator]: [0mInference done 202/4952. 0.0773 s / img. ETA=0:06:17
[32m[11/05 16:04:21 d2.evaluation.evaluator]: [0mInference done 265/4952. 0.0774 s / img. ETA=0:06:12
[32m[11/05 16:04:26 d2.evaluation.evaluator]: [0mInference done 328/4952. 0.0774 s / img. ETA=0:06:07
[32m[11/05 16:04:31 d2.evaluation.evaluator]: [0mInference done 390/4952. 0.0776 s / img. ETA=0:06:03
[32m[11/05 16:04:36 d2.evaluation.evaluator]: [0mInference done 453/4952. 0.0776 s / img. ETA=0:05:58
[32m[11/05 16:04:41 d2.evaluation.evaluator]: [0mInference done 516/4952. 0.0776 s / img. ETA=0:05:53
[32m[11/05 16:04:46 d2.evaluation.evaluator]: [0mInference done 579/4952. 0.0776 s / img. ETA=0:05:48
[32m[11/05 16:04:51 d2.evaluation.evaluator]: [0mInference done 642/4952. 0.0776 s / img. ETA=0:05:43
[32m[11/05 16:04:56 d2.evaluation.evaluator]: [0mInference done 705/4952. 0.0776 s / img. ETA=0:05:38
[32m[11/05 16:05:01 d2.evaluation.evaluator]: [0mInference done 767/4952. 0.0777 s / img. ETA=0:05:34
[32m[11/05 16:05:06 d2.evaluation.evaluator]: [0mInference done 830/4952. 0.0777 s / img. ETA=0:05:29
[32m[11/05 16:05:11 d2.evaluation.evaluator]: [0mInference done 892/4952. 0.0778 s / img. ETA=0:05:24
[32m[11/05 16:05:16 d2.evaluation.evaluator]: [0mInference done 955/4952. 0.0778 s / img. ETA=0:05:19
[32m[11/05 16:05:21 d2.evaluation.evaluator]: [0mInference done 1018/4952. 0.0779 s / img. ETA=0:05:14
[32m[11/05 16:05:26 d2.evaluation.evaluator]: [0mInference done 1080/4952. 0.0779 s / img. ETA=0:05:09
[32m[11/05 16:05:31 d2.evaluation.evaluator]: [0mInference done 1143/4952. 0.0779 s / img. ETA=0:05:04
[32m[11/05 16:05:36 d2.evaluation.evaluator]: [0mInference done 1205/4952. 0.0780 s / img. ETA=0:05:00
[32m[11/05 16:05:41 d2.evaluation.evaluator]: [0mInference done 1268/4952. 0.0780 s / img. ETA=0:04:55
[32m[11/05 16:05:46 d2.evaluation.evaluator]: [0mInference done 1331/4952. 0.0780 s / img. ETA=0:04:50
[32m[11/05 16:05:51 d2.evaluation.evaluator]: [0mInference done 1394/4952. 0.0780 s / img. ETA=0:04:45
[32m[11/05 16:05:56 d2.evaluation.evaluator]: [0mInference done 1456/4952. 0.0780 s / img. ETA=0:04:40
[32m[11/05 16:06:01 d2.evaluation.evaluator]: [0mInference done 1519/4952. 0.0780 s / img. ETA=0:04:35
[32m[11/05 16:06:06 d2.evaluation.evaluator]: [0mInference done 1582/4952. 0.0781 s / img. ETA=0:04:30
[32m[11/05 16:06:11 d2.evaluation.evaluator]: [0mInference done 1645/4952. 0.0781 s / img. ETA=0:04:25
[32m[11/05 16:06:16 d2.evaluation.evaluator]: [0mInference done 1708/4952. 0.0781 s / img. ETA=0:04:20
[32m[11/05 16:06:21 d2.evaluation.evaluator]: [0mInference done 1771/4952. 0.0781 s / img. ETA=0:04:15
[32m[11/05 16:06:26 d2.evaluation.evaluator]: [0mInference done 1834/4952. 0.0781 s / img. ETA=0:04:09
[32m[11/05 16:06:32 d2.evaluation.evaluator]: [0mInference done 1897/4952. 0.0781 s / img. ETA=0:04:04
[32m[11/05 16:06:37 d2.evaluation.evaluator]: [0mInference done 1960/4952. 0.0781 s / img. ETA=0:03:59
[32m[11/05 16:06:42 d2.evaluation.evaluator]: [0mInference done 2024/4952. 0.0780 s / img. ETA=0:03:54
[32m[11/05 16:06:47 d2.evaluation.evaluator]: [0mInference done 2086/4952. 0.0781 s / img. ETA=0:03:49
[32m[11/05 16:06:52 d2.evaluation.evaluator]: [0mInference done 2149/4952. 0.0781 s / img. ETA=0:03:44
[32m[11/05 16:06:57 d2.evaluation.evaluator]: [0mInference done 2212/4952. 0.0781 s / img. ETA=0:03:39
[32m[11/05 16:07:02 d2.evaluation.evaluator]: [0mInference done 2275/4952. 0.0781 s / img. ETA=0:03:34
[32m[11/05 16:07:07 d2.evaluation.evaluator]: [0mInference done 2338/4952. 0.0781 s / img. ETA=0:03:29
[32m[11/05 16:07:12 d2.evaluation.evaluator]: [0mInference done 2401/4952. 0.0781 s / img. ETA=0:03:24
[32m[11/05 16:07:17 d2.evaluation.evaluator]: [0mInference done 2464/4952. 0.0781 s / img. ETA=0:03:19
[32m[11/05 16:07:22 d2.evaluation.evaluator]: [0mInference done 2526/4952. 0.0781 s / img. ETA=0:03:14
[32m[11/05 16:07:27 d2.evaluation.evaluator]: [0mInference done 2588/4952. 0.0781 s / img. ETA=0:03:09
[32m[11/05 16:07:32 d2.evaluation.evaluator]: [0mInference done 2651/4952. 0.0781 s / img. ETA=0:03:04
[32m[11/05 16:07:37 d2.evaluation.evaluator]: [0mInference done 2713/4952. 0.0782 s / img. ETA=0:02:59
[32m[11/05 16:07:42 d2.evaluation.evaluator]: [0mInference done 2775/4952. 0.0782 s / img. ETA=0:02:54
[32m[11/05 16:07:47 d2.evaluation.evaluator]: [0mInference done 2838/4952. 0.0782 s / img. ETA=0:02:49
[32m[11/05 16:07:52 d2.evaluation.evaluator]: [0mInference done 2902/4952. 0.0782 s / img. ETA=0:02:44
[32m[11/05 16:07:57 d2.evaluation.evaluator]: [0mInference done 2964/4952. 0.0782 s / img. ETA=0:02:39
[32m[11/05 16:08:02 d2.evaluation.evaluator]: [0mInference done 3025/4952. 0.0782 s / img. ETA=0:02:34
[32m[11/05 16:08:08 d2.evaluation.evaluator]: [0mInference done 3088/4952. 0.0782 s / img. ETA=0:02:29
[32m[11/05 16:08:13 d2.evaluation.evaluator]: [0mInference done 3150/4952. 0.0783 s / img. ETA=0:02:24
[32m[11/05 16:08:18 d2.evaluation.evaluator]: [0mInference done 3212/4952. 0.0783 s / img. ETA=0:02:19
[32m[11/05 16:08:23 d2.evaluation.evaluator]: [0mInference done 3274/4952. 0.0783 s / img. ETA=0:02:14
[32m[11/05 16:08:28 d2.evaluation.evaluator]: [0mInference done 3336/4952. 0.0783 s / img. ETA=0:02:09
[32m[11/05 16:08:33 d2.evaluation.evaluator]: [0mInference done 3398/4952. 0.0783 s / img. ETA=0:02:04
[32m[11/05 16:08:38 d2.evaluation.evaluator]: [0mInference done 3461/4952. 0.0783 s / img. ETA=0:01:59
[32m[11/05 16:08:43 d2.evaluation.evaluator]: [0mInference done 3522/4952. 0.0783 s / img. ETA=0:01:55
[32m[11/05 16:08:48 d2.evaluation.evaluator]: [0mInference done 3584/4952. 0.0784 s / img. ETA=0:01:50
[32m[11/05 16:08:53 d2.evaluation.evaluator]: [0mInference done 3646/4952. 0.0784 s / img. ETA=0:01:45
[32m[11/05 16:08:58 d2.evaluation.evaluator]: [0mInference done 3708/4952. 0.0784 s / img. ETA=0:01:40
[32m[11/05 16:09:03 d2.evaluation.evaluator]: [0mInference done 3770/4952. 0.0784 s / img. ETA=0:01:35
[32m[11/05 16:09:08 d2.evaluation.evaluator]: [0mInference done 3833/4952. 0.0784 s / img. ETA=0:01:30
[32m[11/05 16:09:13 d2.evaluation.evaluator]: [0mInference done 3895/4952. 0.0784 s / img. ETA=0:01:25
[32m[11/05 16:09:18 d2.evaluation.evaluator]: [0mInference done 3958/4952. 0.0784 s / img. ETA=0:01:20
[32m[11/05 16:09:23 d2.evaluation.evaluator]: [0mInference done 4019/4952. 0.0784 s / img. ETA=0:01:15
[32m[11/05 16:09:28 d2.evaluation.evaluator]: [0mInference done 4081/4952. 0.0784 s / img. ETA=0:01:10
[32m[11/05 16:09:33 d2.evaluation.evaluator]: [0mInference done 4143/4952. 0.0784 s / img. ETA=0:01:05
[32m[11/05 16:09:38 d2.evaluation.evaluator]: [0mInference done 4205/4952. 0.0784 s / img. ETA=0:01:00
[32m[11/05 16:09:43 d2.evaluation.evaluator]: [0mInference done 4266/4952. 0.0785 s / img. ETA=0:00:55
[32m[11/05 16:09:48 d2.evaluation.evaluator]: [0mInference done 4328/4952. 0.0785 s / img. ETA=0:00:50
[32m[11/05 16:09:53 d2.evaluation.evaluator]: [0mInference done 4389/4952. 0.0785 s / img. ETA=0:00:45
[32m[11/05 16:09:58 d2.evaluation.evaluator]: [0mInference done 4452/4952. 0.0785 s / img. ETA=0:00:40
[32m[11/05 16:10:03 d2.evaluation.evaluator]: [0mInference done 4514/4952. 0.0785 s / img. ETA=0:00:35
[32m[11/05 16:10:08 d2.evaluation.evaluator]: [0mInference done 4576/4952. 0.0785 s / img. ETA=0:00:30
[32m[11/05 16:10:13 d2.evaluation.evaluator]: [0mInference done 4638/4952. 0.0785 s / img. ETA=0:00:25
[32m[11/05 16:10:18 d2.evaluation.evaluator]: [0mInference done 4699/4952. 0.0786 s / img. ETA=0:00:20
[32m[11/05 16:10:23 d2.evaluation.evaluator]: [0mInference done 4761/4952. 0.0786 s / img. ETA=0:00:15
[32m[11/05 16:10:29 d2.evaluation.evaluator]: [0mInference done 4823/4952. 0.0786 s / img. ETA=0:00:10
[32m[11/05 16:10:34 d2.evaluation.evaluator]: [0mInference done 4886/4952. 0.0786 s / img. ETA=0:00:05
[32m[11/05 16:10:39 d2.evaluation.evaluator]: [0mInference done 4948/4952. 0.0786 s / img. ETA=0:00:00
[11/05 16:10:39 d2.evaluation.evaluator]: Total inference time: 0:06:39.141057 (0.080683 s / img per device, on 1 devices)
[11/05 16:10:39 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:28 (0.078568 s / img per device, on 1 devices)
[11/05 16:10:39 detectron2]: Image level evaluation complete for custom_voc_2007_test
[11/05 16:10:39 detectron2]: Results for custom_voc_2007_test
[11/05 16:10:39 detectron2]: AP for 0: 0.808440625667572
[11/05 16:10:39 detectron2]: AP for 1: 0.7770674228668213
[11/05 16:10:40 detectron2]: AP for 2: 0.7009527683258057
[11/05 16:10:40 detectron2]: AP for 3: 0.5980304479598999
[11/05 16:10:40 detectron2]: AP for 4: 0.5711202621459961
[11/05 16:10:40 detectron2]: AP for 5: 0.7750421762466431
[11/05 16:10:40 detectron2]: AP for 6: 0.7938251495361328
[11/05 16:10:41 detectron2]: AP for 7: 0.88051837682724
[11/05 16:10:41 detectron2]: AP for 8: 0.5696967840194702
[11/05 16:10:41 detectron2]: AP for 9: 0.7881757616996765
[11/05 16:10:41 detectron2]: AP for 10: 0.6557427048683167
[11/05 16:10:41 detectron2]: AP for 11: 0.8663510680198669
[11/05 16:10:41 detectron2]: AP for 12: 0.8503947854042053
[11/05 16:10:41 detectron2]: AP for 13: 0.8271569609642029
[11/05 16:10:42 detectron2]: AP for 14: 0.788679838180542
[11/05 16:10:42 detectron2]: AP for 15: 0.4945039451122284
[11/05 16:10:42 detectron2]: AP for 16: 0.6859263181686401
[11/05 16:10:42 detectron2]: AP for 17: 0.6699324250221252
[11/05 16:10:42 detectron2]: AP for 18: 0.8395748734474182
[11/05 16:10:43 detectron2]: AP for 19: 0.7555124163627625
[11/05 16:10:43 detectron2]: mAP: 0.7348322868347168
WARNING [11/05 16:10:43 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[11/05 16:10:43 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[11/05 16:10:43 d2.data.build]: Distribution of instances among all 21 categories:
| category | #instances | category | #instances | category | #instances |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown | 15235 | aeroplane | 0 | bicycle | 0 |
| bird | 0 | boat | 0 | bottle | 0 |
| bus | 0 | car | 0 | cat | 0 |
| chair | 0 | cow | 0 | diningtable | 0 |
| dog | 0 | horse | 0 | motorbike | 0 |
| person | 0 | pottedplant | 0 | sheep | 0 |
| sofa | 0 | train | 0 | tvmonitor | 0 |
| | | | | | |
| total | 15235 | | | | |
[11/05 16:10:43 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/05 16:10:43 d2.data.common]: Serialized dataset takes 8.39 MiB
[11/05 16:10:43 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/05 16:10:43 d2.evaluation.evaluator]: Start inference on 4952 images
[32m[11/05 16:10:45 d2.evaluation.evaluator]: [0mInference done 11/4952. 0.0794 s / img. ETA=0:06:40
[32m[11/05 16:10:50 d2.evaluation.evaluator]: [0mInference done 74/4952. 0.0786 s / img. ETA=0:06:33
[32m[11/05 16:10:55 d2.evaluation.evaluator]: [0mInference done 136/4952. 0.0788 s / img. ETA=0:06:28
[32m[11/05 16:11:00 d2.evaluation.evaluator]: [0mInference done 198/4952. 0.0790 s / img. ETA=0:06:24
[32m[11/05 16:11:05 d2.evaluation.evaluator]: [0mInference done 260/4952. 0.0791 s / img. ETA=0:06:20
[32m[11/05 16:11:10 d2.evaluation.evaluator]: [0mInference done 321/4952. 0.0794 s / img. ETA=0:06:16
[32m[11/05 16:11:15 d2.evaluation.evaluator]: [0mInference done 383/4952. 0.0794 s / img. ETA=0:06:11
[32m[11/05 16:11:20 d2.evaluation.evaluator]: [0mInference done 445/4952. 0.0793 s / img. ETA=0:06:06
[32m[11/05 16:11:25 d2.evaluation.evaluator]: [0mInference done 506/4952. 0.0795 s / img. ETA=0:06:01
[32m[11/05 16:11:30 d2.evaluation.evaluator]: [0mInference done 567/4952. 0.0795 s / img. ETA=0:05:57
[32m[11/05 16:11:35 d2.evaluation.evaluator]: [0mInference done 629/4952. 0.0796 s / img. ETA=0:05:52
[32m[11/05 16:11:40 d2.evaluation.evaluator]: [0mInference done 691/4952. 0.0796 s / img. ETA=0:05:47
[32m[11/05 16:11:45 d2.evaluation.evaluator]: [0mInference done 752/4952. 0.0796 s / img. ETA=0:05:42
[32m[11/05 16:11:50 d2.evaluation.evaluator]: [0mInference done 815/4952. 0.0795 s / img. ETA=0:05:37
[32m[11/05 16:11:55 d2.evaluation.evaluator]: [0mInference done 877/4952. 0.0795 s / img. ETA=0:05:32
[32m[11/05 16:12:00 d2.evaluation.evaluator]: [0mInference done 938/4952. 0.0796 s / img. ETA=0:05:27
[32m[11/05 16:12:05 d2.evaluation.evaluator]: [0mInference done 999/4952. 0.0796 s / img. ETA=0:05:22
[32m[11/05 16:12:10 d2.evaluation.evaluator]: [0mInference done 1061/4952. 0.0796 s / img. ETA=0:05:17
[32m[11/05 16:12:15 d2.evaluation.evaluator]: [0mInference done 1122/4952. 0.0797 s / img. ETA=0:05:12
[32m[11/05 16:12:20 d2.evaluation.evaluator]: [0mInference done 1184/4952. 0.0796 s / img. ETA=0:05:07
[32m[11/05 16:12:26 d2.evaluation.evaluator]: [0mInference done 1246/4952. 0.0796 s / img. ETA=0:05:02
[32m[11/05 16:12:31 d2.evaluation.evaluator]: [0mInference done 1306/4952. 0.0797 s / img. ETA=0:04:57
[32m[11/05 16:12:36 d2.evaluation.evaluator]: [0mInference done 1366/4952. 0.0798 s / img. ETA=0:04:53
[32m[11/05 16:12:41 d2.evaluation.evaluator]: [0mInference done 1428/4952. 0.0798 s / img. ETA=0:04:48
[32m[11/05 16:12:46 d2.evaluation.evaluator]: [0mInference done 1490/4952. 0.0798 s / img. ETA=0:04:43
[32m[11/05 16:12:51 d2.evaluation.evaluator]: [0mInference done 1552/4952. 0.0798 s / img. ETA=0:04:38
[32m[11/05 16:12:56 d2.evaluation.evaluator]: [0mInference done 1613/4952. 0.0798 s / img. ETA=0:04:33
[32m[11/05 16:13:01 d2.evaluation.evaluator]: [0mInference done 1675/4952. 0.0798 s / img. ETA=0:04:28
[32m[11/05 16:13:06 d2.evaluation.evaluator]: [0mInference done 1737/4952. 0.0798 s / img. ETA=0:04:22
[32m[11/05 16:13:11 d2.evaluation.evaluator]: [0mInference done 1798/4952. 0.0798 s / img. ETA=0:04:18
[32m[11/05 16:13:16 d2.evaluation.evaluator]: [0mInference done 1861/4952. 0.0798 s / img. ETA=0:04:12
[32m[11/05 16:13:21 d2.evaluation.evaluator]: [0mInference done 1923/4952. 0.0798 s / img. ETA=0:04:07
[32m[11/05 16:13:26 d2.evaluation.evaluator]: [0mInference done 1984/4952. 0.0798 s / img. ETA=0:04:02
[32m[11/05 16:13:31 d2.evaluation.evaluator]: [0mInference done 2044/4952. 0.0799 s / img. ETA=0:03:58
[32m[11/05 16:13:36 d2.evaluation.evaluator]: [0mInference done 2105/4952. 0.0799 s / img. ETA=0:03:53
[32m[11/05 16:13:41 d2.evaluation.evaluator]: [0mInference done 2167/4952. 0.0799 s / img. ETA=0:03:48
[32m[11/05 16:13:46 d2.evaluation.evaluator]: [0mInference done 2228/4952. 0.0799 s / img. ETA=0:03:43
[32m[11/05 16:13:51 d2.evaluation.evaluator]: [0mInference done 2289/4952. 0.0799 s / img. ETA=0:03:38
[32m[11/05 16:13:56 d2.evaluation.evaluator]: [0mInference done 2350/4952. 0.0799 s / img. ETA=0:03:33
[32m[11/05 16:14:01 d2.evaluation.evaluator]: [0mInference done 2411/4952. 0.0800 s / img. ETA=0:03:28
[32m[11/05 16:14:06 d2.evaluation.evaluator]: [0mInference done 2472/4952. 0.0800 s / img. ETA=0:03:23
[32m[11/05 16:14:12 d2.evaluation.evaluator]: [0mInference done 2534/4952. 0.0800 s / img. ETA=0:03:18
[32m[11/05 16:14:17 d2.evaluation.evaluator]: [0mInference done 2595/4952. 0.0800 s / img. ETA=0:03:13
[32m[11/05 16:14:22 d2.evaluation.evaluator]: [0mInference done 2657/4952. 0.0800 s / img. ETA=0:03:08
[32m[11/05 16:14:27 d2.evaluation.evaluator]: [0mInference done 2718/4952. 0.0800 s / img. ETA=0:03:03
[32m[11/05 16:14:32 d2.evaluation.evaluator]: [0mInference done 2780/4952. 0.0800 s / img. ETA=0:02:58
[32m[11/05 16:14:37 d2.evaluation.evaluator]: [0mInference done 2842/4952. 0.0800 s / img. ETA=0:02:53
[32m[11/05 16:14:42 d2.evaluation.evaluator]: [0mInference done 2904/4952. 0.0800 s / img. ETA=0:02:47
[32m[11/05 16:14:47 d2.evaluation.evaluator]: [0mInference done 2966/4952. 0.0800 s / img. ETA=0:02:42
[32m[11/05 16:14:52 d2.evaluation.evaluator]: [0mInference done 3028/4952. 0.0800 s / img. ETA=0:02:37
[32m[11/05 16:14:57 d2.evaluation.evaluator]: [0mInference done 3090/4952. 0.0800 s / img. ETA=0:02:32
[32m[11/05 16:15:02 d2.evaluation.evaluator]: [0mInference done 3152/4952. 0.0800 s / img. ETA=0:02:27
[32m[11/05 16:15:07 d2.evaluation.evaluator]: [0mInference done 3214/4952. 0.0799 s / img. ETA=0:02:22
[32m[11/05 16:15:12 d2.evaluation.evaluator]: [0mInference done 3275/4952. 0.0800 s / img. ETA=0:02:17
[32m[11/05 16:15:17 d2.evaluation.evaluator]: [0mInference done 3336/4952. 0.0800 s / img. ETA=0:02:12
[32m[11/05 16:15:22 d2.evaluation.evaluator]: [0mInference done 3397/4952. 0.0800 s / img. ETA=0:02:07
[32m[11/05 16:15:27 d2.evaluation.evaluator]: [0mInference done 3460/4952. 0.0799 s / img. ETA=0:02:02
[32m[11/05 16:15:32 d2.evaluation.evaluator]: [0mInference done 3521/4952. 0.0799 s / img. ETA=0:01:57
[32m[11/05 16:15:37 d2.evaluation.evaluator]: [0mInference done 3582/4952. 0.0800 s / img. ETA=0:01:52
[32m[11/05 16:15:42 d2.evaluation.evaluator]: [0mInference done 3643/4952. 0.0800 s / img. ETA=0:01:47
[32m[11/05 16:15:47 d2.evaluation.evaluator]: [0mInference done 3705/4952. 0.0800 s / img. ETA=0:01:42
[32m[11/05 16:15:52 d2.evaluation.evaluator]: [0mInference done 3767/4952. 0.0800 s / img. ETA=0:01:37
[32m[11/05 16:15:58 d2.evaluation.evaluator]: [0mInference done 3828/4952. 0.0800 s / img. ETA=0:01:32
[32m[11/05 16:16:03 d2.evaluation.evaluator]: [0mInference done 3889/4952. 0.0800 s / img. ETA=0:01:27
[32m[11/05 16:16:08 d2.evaluation.evaluator]: [0mInference done 3950/4952. 0.0800 s / img. ETA=0:01:22
[32m[11/05 16:16:13 d2.evaluation.evaluator]: [0mInference done 4012/4952. 0.0800 s / img. ETA=0:01:17
[32m[11/05 16:16:18 d2.evaluation.evaluator]: [0mInference done 4073/4952. 0.0800 s / img. ETA=0:01:12
[32m[11/05 16:16:23 d2.evaluation.evaluator]: [0mInference done 4134/4952. 0.0800 s / img. ETA=0:01:07
[32m[11/05 16:16:28 d2.evaluation.evaluator]: [0mInference done 4196/4952. 0.0800 s / img. ETA=0:01:01
[32m[11/05 16:16:33 d2.evaluation.evaluator]: [0mInference done 4256/4952. 0.0800 s / img. ETA=0:00:57
[32m[11/05 16:16:38 d2.evaluation.evaluator]: [0mInference done 4318/4952. 0.0800 s / img. ETA=0:00:51
[32m[11/05 16:16:43 d2.evaluation.evaluator]: [0mInference done 4380/4952. 0.0800 s / img. ETA=0:00:46
[32m[11/05 16:16:48 d2.evaluation.evaluator]: [0mInference done 4442/4952. 0.0800 s / img. ETA=0:00:41
[32m[11/05 16:16:53 d2.evaluation.evaluator]: [0mInference done 4503/4952. 0.0800 s / img. ETA=0:00:36
[32m[11/05 16:16:58 d2.evaluation.evaluator]: [0mInference done 4564/4952. 0.0800 s / img. ETA=0:00:31
[32m[11/05 16:17:03 d2.evaluation.evaluator]: [0mInference done 4625/4952. 0.0800 s / img. ETA=0:00:26
[32m[11/05 16:17:08 d2.evaluation.evaluator]: [0mInference done 4687/4952. 0.0800 s / img. ETA=0:00:21
[32m[11/05 16:17:13 d2.evaluation.evaluator]: [0mInference done 4749/4952. 0.0800 s / img. ETA=0:00:16
[32m[11/05 16:17:18 d2.evaluation.evaluator]: [0mInference done 4810/4952. 0.0800 s / img. ETA=0:00:11
[32m[11/05 16:17:23 d2.evaluation.evaluator]: [0mInference done 4871/4952. 0.0800 s / img. ETA=0:00:06
[32m[11/05 16:17:28 d2.evaluation.evaluator]: [0mInference done 4933/4952. 0.0800 s / img. ETA=0:00:01
[11/05 16:17:30 d2.evaluation.evaluator]: Total inference time: 0:06:45.566370 (0.081982 s / img per device, on 1 devices)
[11/05 16:17:30 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:35 (0.079976 s / img per device, on 1 devices)
[11/05 16:17:30 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[11/05 16:17:30 detectron2]: Results for WR1_Mixed_Unknowns
[11/05 16:17:30 detectron2]: AP for 0: 0.0
[11/05 16:17:30 detectron2]: AP for 1: 0.0
[11/05 16:17:30 detectron2]: AP for 2: 0.0
[11/05 16:17:31 detectron2]: AP for 3: 0.0
[11/05 16:17:31 detectron2]: AP for 4: 0.0
[11/05 16:17:31 detectron2]: AP for 5: 0.0
[11/05 16:17:31 detectron2]: AP for 6: 0.0
[11/05 16:17:31 detectron2]: AP for 7: 0.0
[11/05 16:17:31 detectron2]: AP for 8: 0.0
[11/05 16:17:32 detectron2]: AP for 9: 0.0
[11/05 16:17:32 detectron2]: AP for 10: 0.0
[11/05 16:17:32 detectron2]: AP for 11: 0.0
[11/05 16:17:32 detectron2]: AP for 12: 0.0
[11/05 16:17:32 detectron2]: AP for 13: 0.0
[11/05 16:17:32 detectron2]: AP for 14: 0.0
[11/05 16:17:32 detectron2]: AP for 15: 0.0
[11/05 16:17:33 detectron2]: AP for 16: 0.0
[11/05 16:17:33 detectron2]: AP for 17: 0.0
[11/05 16:17:33 detectron2]: AP for 18: 0.0
[11/05 16:17:33 detectron2]: AP for 19: 0.0
[11/05 16:17:33 detectron2]: mAP: 0.0
[11/05 16:17:33 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[11/05 16:17:33 detectron2]: AP for 0: 0.7963087558746338
[11/05 16:17:33 detectron2]: AP for 1: 0.7675489783287048
[11/05 16:17:33 detectron2]: AP for 2: 0.6266054511070251
[11/05 16:17:33 detectron2]: AP for 3: 0.5853570699691772
[11/05 16:17:33 detectron2]: AP for 4: 0.5181585550308228
[11/05 16:17:33 detectron2]: AP for 5: 0.735167920589447
[11/05 16:17:33 detectron2]: AP for 6: 0.7736961245536804
[11/05 16:17:33 detectron2]: AP for 7: 0.8574226498603821
[11/05 16:17:33 detectron2]: AP for 8: 0.5211853384971619
[11/05 16:17:33 detectron2]: AP for 9: 0.6248680353164673
[11/05 16:17:33 detectron2]: AP for 10: 0.4676196277141571
[11/05 16:17:33 detectron2]: AP for 11: 0.7827640175819397
[11/05 16:17:33 detectron2]: AP for 12: 0.7323558330535889
[11/05 16:17:33 detectron2]: AP for 13: 0.8122451901435852
[11/05 16:17:33 detectron2]: AP for 14: 0.7807108163833618
[11/05 16:17:33 detectron2]: AP for 15: 0.42341139912605286
[11/05 16:17:33 detectron2]: AP for 16: 0.5794138312339783
[11/05 16:17:33 detectron2]: AP for 17: 0.5766554474830627
[11/05 16:17:33 detectron2]: AP for 18: 0.8170406818389893
[11/05 16:17:33 detectron2]: AP for 19: 0.6788268685340881
[11/05 16:17:33 detectron2]: mAP: 0.6728681325912476
[32m[11/05 16:17:35 detectron2]: [0mAP for 0.0: 0.808440625667572
[32m[11/05 16:17:35 detectron2]: [0mAP for 1.0: 0.7770674228668213
[32m[11/05 16:17:35 detectron2]: [0mAP for 2.0: 0.7009527683258057
[32m[11/05 16:17:35 detectron2]: [0mAP for 3.0: 0.5980304479598999
[32m[11/05 16:17:35 detectron2]: [0mAP for 4.0: 0.5711202621459961
[32m[11/05 16:17:35 detectron2]: [0mAP for 5.0: 0.7750421762466431
[32m[11/05 16:17:35 detectron2]: [0mAP for 6.0: 0.7938251495361328
[32m[11/05 16:17:35 detectron2]: [0mAP for 7.0: 0.88051837682724
[32m[11/05 16:17:35 detectron2]: [0mAP for 8.0: 0.5696967840194702
[32m[11/05 16:17:35 detectron2]: [0mAP for 9.0: 0.7881757616996765
[32m[11/05 16:17:35 detectron2]: [0mAP for 10.0: 0.6557427048683167
[32m[11/05 16:17:35 detectron2]: [0mAP for 11.0: 0.8663510680198669
[32m[11/05 16:17:35 detectron2]: [0mAP for 12.0: 0.8503947854042053
[32m[11/05 16:17:35 detectron2]: [0mAP for 13.0: 0.8271569609642029
[32m[11/05 16:17:35 detectron2]: [0mAP for 14.0: 0.788679838180542
[32m[11/05 16:17:35 detectron2]: [0mAP for 15.0: 0.4945039451122284
[32m[11/05 16:17:35 detectron2]: [0mAP for 16.0: 0.6859263181686401
[32m[11/05 16:17:35 detectron2]: [0mAP for 17.0: 0.6699324250221252
[32m[11/05 16:17:35 detectron2]: [0mAP for 18.0: 0.8395748734474182
[32m[11/05 16:17:35 detectron2]: [0mAP for 19.0: 0.7555124163627625
[32m[11/05 16:17:35 detectron2]: [0mmAP: 0.7348322868347168
[32m[11/05 16:17:35 detectron2]: [0mAP for 0.0: 0.8074251413345337
[32m[11/05 16:17:35 detectron2]: [0mAP for 1.0: 0.7757116556167603
[32m[11/05 16:17:35 detectron2]: [0mAP for 2.0: 0.6927441358566284
[32m[11/05 16:17:35 detectron2]: [0mAP for 3.0: 0.5954269170761108
[32m[11/05 16:17:35 detectron2]: [0mAP for 4.0: 0.564703106880188
[32m[11/05 16:17:35 detectron2]: [0mAP for 5.0: 0.7729313373565674
[32m[11/05 16:17:35 detectron2]: [0mAP for 6.0: 0.793378472328186
[32m[11/05 16:17:35 detectron2]: [0mAP for 7.0: 0.8772933483123779
[32m[11/05 16:17:35 detectron2]: [0mAP for 8.0: 0.5648582577705383
[32m[11/05 16:17:35 detectron2]: [0mAP for 9.0: 0.7626721858978271
[32m[11/05 16:17:35 detectron2]: [0mAP for 10.0: 0.6286855936050415
[32m[11/05 16:17:35 detectron2]: [0mAP for 11.0: 0.8550533056259155
[32m[11/05 16:17:35 detectron2]: [0mAP for 12.0: 0.8281086087226868
[32m[11/05 16:17:35 detectron2]: [0mAP for 13.0: 0.8261915445327759
[32m[11/05 16:17:35 detectron2]: [0mAP for 14.0: 0.7882627844810486
[32m[11/05 16:17:35 detectron2]: [0mAP for 15.0: 0.4836746156215668
[32m[11/05 16:17:35 detectron2]: [0mAP for 16.0: 0.6525588631629944
[32m[11/05 16:17:35 detectron2]: [0mAP for 17.0: 0.6623187065124512
[32m[11/05 16:17:35 detectron2]: [0mAP for 18.0: 0.8355180621147156
[32m[11/05 16:17:35 detectron2]: [0mAP for 19.0: 0.7453069686889648
[32m[11/05 16:17:35 detectron2]: [0mmAP: 0.7256411910057068
[32m[11/05 16:17:35 detectron2]: [0mAP for 0.0: 0.8071157336235046
[32m[11/05 16:17:35 detectron2]: [0mAP for 1.0: 0.7746164202690125
[32m[11/05 16:17:35 detectron2]: [0mAP for 2.0: 0.682853102684021
[32m[11/05 16:17:35 detectron2]: [0mAP for 3.0: 0.5943691730499268
[32m[11/05 16:17:35 detectron2]: [0mAP for 4.0: 0.5593255162239075
[32m[11/05 16:17:35 detectron2]: [0mAP for 5.0: 0.7691256999969482
[32m[11/05 16:17:35 detectron2]: [0mAP for 6.0: 0.7894589900970459
[32m[11/05 16:17:35 detectron2]: [0mAP for 7.0: 0.8758093118667603
[32m[11/05 16:17:35 detectron2]: [0mAP for 8.0: 0.5579070448875427
[32m[11/05 16:17:35 detectron2]: [0mAP for 9.0: 0.7415065765380859
[32m[11/05 16:17:35 detectron2]: [0mAP for 10.0: 0.6064231395721436
[32m[11/05 16:17:35 detectron2]: [0mAP for 11.0: 0.8470325469970703
[32m[11/05 16:17:35 detectron2]: [0mAP for 12.0: 0.8122345209121704
[32m[11/05 16:17:35 detectron2]: [0mAP for 13.0: 0.8237715363502502
[32m[11/05 16:17:35 detectron2]: [0mAP for 14.0: 0.7872760891914368
[32m[11/05 16:17:35 detectron2]: [0mAP for 15.0: 0.4740290641784668
[32m[11/05 16:17:35 detectron2]: [0mAP for 16.0: 0.6393458843231201
[32m[11/05 16:17:35 detectron2]: [0mAP for 17.0: 0.6541965007781982
[32m[11/05 16:17:35 detectron2]: [0mAP for 18.0: 0.8327656388282776
[32m[11/05 16:17:35 detectron2]: [0mAP for 19.0: 0.7375192046165466
[32m[11/05 16:17:35 detectron2]: [0mmAP: 0.7183341383934021
[32m[11/05 16:17:35 detectron2]: [0mAP for 0.0: 0.8049968481063843
[32m[11/05 16:17:35 detectron2]: [0mAP for 1.0: 0.7736841440200806
[32m[11/05 16:17:35 detectron2]: [0mAP for 2.0: 0.6740680932998657
[32m[11/05 16:17:35 detectron2]: [0mAP for 3.0: 0.5930513739585876
[32m[11/05 16:17:35 detectron2]: [0mAP for 4.0: 0.5483116507530212
[32m[11/05 16:17:35 detectron2]: [0mAP for 5.0: 0.7650444507598877
[32m[11/05 16:17:35 detectron2]: [0mAP for 6.0: 0.7877101898193359
[32m[11/05 16:17:35 detectron2]: [0mAP for 7.0: 0.8685283660888672
[32m[11/05 16:17:35 detectron2]: [0mAP for 8.0: 0.552049458026886
[32m[11/05 16:17:35 detectron2]: [0mAP for 9.0: 0.7125534415245056
[32m[11/05 16:17:35 detectron2]: [0mAP for 10.0: 0.5847764611244202
[32m[11/05 16:17:35 detectron2]: [0mAP for 11.0: 0.8309327960014343
[32m[11/05 16:17:35 detectron2]: [0mAP for 12.0: 0.7989715337753296
[32m[11/05 16:17:35 detectron2]: [0mAP for 13.0: 0.822482705116272
[32m[11/05 16:17:35 detectron2]: [0mAP for 14.0: 0.7867444753646851
[32m[11/05 16:17:35 detectron2]: [0mAP for 15.0: 0.467607706785202
[32m[11/05 16:17:35 detectron2]: [0mAP for 16.0: 0.624614953994751
[32m[11/05 16:17:35 detectron2]: [0mAP for 17.0: 0.64666748046875
[32m[11/05 16:17:35 detectron2]: [0mAP for 18.0: 0.83025723695755
[32m[11/05 16:17:35 detectron2]: [0mAP for 19.0: 0.7287084460258484
[32m[11/05 16:17:35 detectron2]: [0mmAP: 0.7100880742073059
[32m[11/05 16:17:35 detectron2]: [0mAP for 0.0: 0.8049968481063843
[32m[11/05 16:17:35 detectron2]: [0mAP for 1.0: 0.77252596616745
[32m[11/05 16:17:35 detectron2]: [0mAP for 2.0: 0.6679394245147705
[32m[11/05 16:17:35 detectron2]: [0mAP for 3.0: 0.5917534828186035
[32m[11/05 16:17:35 detectron2]: [0mAP for 4.0: 0.5431528687477112
[32m[11/05 16:17:35 detectron2]: [0mAP for 5.0: 0.763018786907196
[32m[11/05 16:17:35 detectron2]: [0mAP for 6.0: 0.7866762280464172
[32m[11/05 16:17:35 detectron2]: [0mAP for 7.0: 0.8674986362457275
[32m[11/05 16:17:35 detectron2]: [0mAP for 8.0: 0.5460099577903748
[32m[11/05 16:17:35 detectron2]: [0mAP for 9.0: 0.6895670294761658
[32m[11/05 16:17:35 detectron2]: [0mAP for 10.0: 0.5694316029548645
[32m[11/05 16:17:35 detectron2]: [0mAP for 11.0: 0.8236904144287109
[32m[11/05 16:17:35 detectron2]: [0mAP for 12.0: 0.7914615273475647
[32m[11/05 16:17:35 detectron2]: [0mAP for 13.0: 0.8206184506416321
[32m[11/05 16:17:36 detectron2]: [0mAP for 14.0: 0.7857439517974854
[32m[11/05 16:17:36 detectron2]: [0mAP for 15.0: 0.4596113860607147
[32m[11/05 16:17:36 detectron2]: [0mAP for 16.0: 0.6189888119697571
[32m[11/05 16:17:36 detectron2]: [0mAP for 17.0: 0.6328640580177307
[32m[11/05 16:17:36 detectron2]: [0mAP for 18.0: 0.827954888343811
[32m[11/05 16:17:36 detectron2]: [0mAP for 19.0: 0.7236618995666504
[32m[11/05 16:17:36 detectron2]: [0mmAP: 0.7043583989143372
[32m[11/05 16:17:36 detectron2]: [0mAP for 0.0: 0.8033401370048523
[32m[11/05 16:17:36 detectron2]: [0mAP for 1.0: 0.7720251679420471
[32m[11/05 16:17:36 detectron2]: [0mAP for 2.0: 0.662475049495697
[32m[11/05 16:17:36 detectron2]: [0mAP for 3.0: 0.5906260013580322
[32m[11/05 16:17:36 detectron2]: [0mAP for 4.0: 0.5390602350234985
[32m[11/05 16:17:36 detectron2]: [0mAP for 5.0: 0.7610435485839844
[32m[11/05 16:17:36 detectron2]: [0mAP for 6.0: 0.7842805981636047
[32m[11/05 16:17:36 detectron2]: [0mAP for 7.0: 0.8644418716430664
[32m[11/05 16:17:36 detectron2]: [0mAP for 8.0: 0.5414519309997559
[32m[11/05 16:17:36 detectron2]: [0mAP for 9.0: 0.683279275894165
[32m[11/05 16:17:36 detectron2]: [0mAP for 10.0: 0.5558233261108398
[32m[11/05 16:17:36 detectron2]: [0mAP for 11.0: 0.8159647583961487
[32m[11/05 16:17:36 detectron2]: [0mAP for 12.0: 0.7788593769073486
[32m[11/05 16:17:36 detectron2]: [0mAP for 13.0: 0.819983720779419
[32m[11/05 16:17:36 detectron2]: [0mAP for 14.0: 0.7851929068565369
[32m[11/05 16:17:36 detectron2]: [0mAP for 15.0: 0.4529585838317871
[32m[11/05 16:17:36 detectron2]: [0mAP for 16.0: 0.613614022731781
[32m[11/05 16:17:36 detectron2]: [0mAP for 17.0: 0.6216874122619629
[32m[11/05 16:17:36 detectron2]: [0mAP for 18.0: 0.8259549736976624
[32m[11/05 16:17:36 detectron2]: [0mAP for 19.0: 0.7115657329559326
[32m[11/05 16:17:36 detectron2]: [0mmAP: 0.6991814374923706
[32m[11/05 16:17:36 detectron2]: [0mAP for 0.0: 0.7981526851654053
[32m[11/05 16:17:36 detectron2]: [0mAP for 1.0: 0.7715327739715576
[32m[11/05 16:17:36 detectron2]: [0mAP for 2.0: 0.6526334285736084
[32m[11/05 16:17:36 detectron2]: [0mAP for 3.0: 0.5900876522064209
[32m[11/05 16:17:36 detectron2]: [0mAP for 4.0: 0.5349059700965881
[32m[11/05 16:17:36 detectron2]: [0mAP for 5.0: 0.756045937538147
[32m[11/05 16:17:36 detectron2]: [0mAP for 6.0: 0.782234251499176
[32m[11/05 16:17:36 detectron2]: [0mAP for 7.0: 0.8632158637046814
[32m[11/05 16:17:36 detectron2]: [0mAP for 8.0: 0.5364787578582764
[32m[11/05 16:17:36 detectron2]: [0mAP for 9.0: 0.6754810810089111
[32m[11/05 16:17:36 detectron2]: [0mAP for 10.0: 0.536702573299408
[32m[11/05 16:17:36 detectron2]: [0mAP for 11.0: 0.8106412887573242
[32m[11/05 16:17:36 detectron2]: [0mAP for 12.0: 0.7694507837295532
[32m[11/05 16:17:36 detectron2]: [0mAP for 13.0: 0.8182892203330994
[32m[11/05 16:17:36 detectron2]: [0mAP for 14.0: 0.7837110757827759
[32m[11/05 16:17:36 detectron2]: [0mAP for 15.0: 0.4455527365207672
[32m[11/05 16:17:36 detectron2]: [0mAP for 16.0: 0.610645592212677
[32m[11/05 16:17:36 detectron2]: [0mAP for 17.0: 0.6072170734405518
[32m[11/05 16:17:36 detectron2]: [0mAP for 18.0: 0.8240447640419006
[32m[11/05 16:17:36 detectron2]: [0mAP for 19.0: 0.7019145488739014
[32m[11/05 16:17:36 detectron2]: [0mmAP: 0.6934468746185303
[32m[11/05 16:17:36 detectron2]: [0mAP for 0.0: 0.7981526851654053
[32m[11/05 16:17:36 detectron2]: [0mAP for 1.0: 0.7703367471694946
[32m[11/05 16:17:36 detectron2]: [0mAP for 2.0: 0.6466332077980042
[32m[11/05 16:17:36 detectron2]: [0mAP for 3.0: 0.5887355804443359
[32m[11/05 16:17:36 detectron2]: [0mAP for 4.0: 0.5296403169631958
[32m[11/05 16:17:36 detectron2]: [0mAP for 5.0: 0.752450704574585
[32m[11/05 16:17:36 detectron2]: [0mAP for 6.0: 0.7786900997161865
[32m[11/05 16:17:36 detectron2]: [0mAP for 7.0: 0.8613957762718201
[32m[11/05 16:17:36 detectron2]: [0mAP for 8.0: 0.5334128737449646
[32m[11/05 16:17:36 detectron2]: [0mAP for 9.0: 0.667311429977417
[32m[11/05 16:17:36 detectron2]: [0mAP for 10.0: 0.522779107093811
[32m[11/05 16:17:36 detectron2]: [0mAP for 11.0: 0.8044999837875366
[32m[11/05 16:17:36 detectron2]: [0mAP for 12.0: 0.7649497389793396
[32m[11/05 16:17:36 detectron2]: [0mAP for 13.0: 0.8165357112884521
[32m[11/05 16:17:36 detectron2]: [0mAP for 14.0: 0.7826014161109924
[32m[11/05 16:17:36 detectron2]: [0mAP for 15.0: 0.43944063782691956
[32m[11/05 16:17:36 detectron2]: [0mAP for 16.0: 0.6019951701164246
[32m[11/05 16:17:36 detectron2]: [0mAP for 17.0: 0.6002336740493774
[32m[11/05 16:17:36 detectron2]: [0mAP for 18.0: 0.8227123022079468
[32m[11/05 16:17:36 detectron2]: [0mAP for 19.0: 0.6941025257110596
[32m[11/05 16:17:36 detectron2]: [0mmAP: 0.6888304948806763
[32m[11/05 16:17:36 detectron2]: [0mAP for 0.0: 0.7972252368927002
[32m[11/05 16:17:36 detectron2]: [0mAP for 1.0: 0.7689640522003174
[32m[11/05 16:17:36 detectron2]: [0mAP for 2.0: 0.6400614380836487
[32m[11/05 16:17:36 detectron2]: [0mAP for 3.0: 0.5880363583564758
[32m[11/05 16:17:36 detectron2]: [0mAP for 4.0: 0.5246222019195557
[32m[11/05 16:17:36 detectron2]: [0mAP for 5.0: 0.7388509511947632
[32m[11/05 16:17:36 detectron2]: [0mAP for 6.0: 0.7777529358863831
[32m[11/05 16:17:36 detectron2]: [0mAP for 7.0: 0.859672486782074
[32m[11/05 16:17:36 detectron2]: [0mAP for 8.0: 0.5305948257446289
[32m[11/05 16:17:36 detectron2]: [0mAP for 9.0: 0.6403424739837646
[32m[11/05 16:17:36 detectron2]: [0mAP for 10.0: 0.49243098497390747
[32m[11/05 16:17:36 detectron2]: [0mAP for 11.0: 0.793095052242279
[32m[11/05 16:17:36 detectron2]: [0mAP for 12.0: 0.7520943284034729
[32m[11/05 16:17:36 detectron2]: [0mAP for 13.0: 0.814483642578125
[32m[11/05 16:17:36 detectron2]: [0mAP for 14.0: 0.7821661233901978
[32m[11/05 16:17:36 detectron2]: [0mAP for 15.0: 0.43383291363716125
[32m[11/05 16:17:36 detectron2]: [0mAP for 16.0: 0.5963069200515747
[32m[11/05 16:17:36 detectron2]: [0mAP for 17.0: 0.5941689014434814
[32m[11/05 16:17:36 detectron2]: [0mAP for 18.0: 0.8211178183555603
[32m[11/05 16:17:36 detectron2]: [0mAP for 19.0: 0.6900832653045654
[32m[11/05 16:17:36 detectron2]: [0mmAP: 0.6817951798439026
[32m[11/05 16:17:37 detectron2]: [0mAP for 0.0: 0.7972252368927002
[32m[11/05 16:17:37 detectron2]: [0mAP for 1.0: 0.7681931853294373
[32m[11/05 16:17:37 detectron2]: [0mAP for 2.0: 0.631178617477417
[32m[11/05 16:17:37 detectron2]: [0mAP for 3.0: 0.5863597393035889
[32m[11/05 16:17:37 detectron2]: [0mAP for 4.0: 0.5224187970161438
[32m[11/05 16:17:37 detectron2]: [0mAP for 5.0: 0.7380679845809937
[32m[11/05 16:17:37 detectron2]: [0mAP for 6.0: 0.7751660943031311
[32m[11/05 16:17:37 detectron2]: [0mAP for 7.0: 0.8585228323936462
[32m[11/05 16:17:37 detectron2]: [0mAP for 8.0: 0.5277168154716492
[32m[11/05 16:17:37 detectron2]: [0mAP for 9.0: 0.6364344954490662
[32m[11/05 16:17:37 detectron2]: [0mAP for 10.0: 0.47778645157814026
[32m[11/05 16:17:37 detectron2]: [0mAP for 11.0: 0.788770318031311
[32m[11/05 16:17:37 detectron2]: [0mAP for 12.0: 0.7422476410865784
[32m[11/05 16:17:37 detectron2]: [0mAP for 13.0: 0.8132762312889099
[32m[11/05 16:17:37 detectron2]: [0mAP for 14.0: 0.7815334796905518
[32m[11/05 16:17:37 detectron2]: [0mAP for 15.0: 0.4281567335128784
[32m[11/05 16:17:37 detectron2]: [0mAP for 16.0: 0.5900235772132874
[32m[11/05 16:17:37 detectron2]: [0mAP for 17.0: 0.5849940776824951
[32m[11/05 16:17:37 detectron2]: [0mAP for 18.0: 0.8193250894546509
[32m[11/05 16:17:37 detectron2]: [0mAP for 19.0: 0.6870383024215698
[32m[11/05 16:17:37 detectron2]: [0mmAP: 0.6777218580245972
[32m[11/05 16:17:37 detectron2]: [0mAP for 0.0: 0.7963087558746338
[32m[11/05 16:17:37 detectron2]: [0mAP for 1.0: 0.7675489783287048
[32m[11/05 16:17:37 detectron2]: [0mAP for 2.0: 0.6266054511070251
[32m[11/05 16:17:37 detectron2]: [0mAP for 3.0: 0.5853570699691772
[32m[11/05 16:17:37 detectron2]: [0mAP for 4.0: 0.5181585550308228
[32m[11/05 16:17:37 detectron2]: [0mAP for 5.0: 0.735167920589447
[32m[11/05 16:17:37 detectron2]: [0mAP for 6.0: 0.7736961245536804
[32m[11/05 16:17:37 detectron2]: [0mAP for 7.0: 0.8574226498603821
[32m[11/05 16:17:37 detectron2]: [0mAP for 8.0: 0.5211853384971619
[32m[11/05 16:17:37 detectron2]: [0mAP for 9.0: 0.6248680353164673
[32m[11/05 16:17:37 detectron2]: [0mAP for 10.0: 0.4676196277141571
[32m[11/05 16:17:37 detectron2]: [0mAP for 11.0: 0.7827640175819397
[32m[11/05 16:17:37 detectron2]: [0mAP for 12.0: 0.7323558330535889
[32m[11/05 16:17:37 detectron2]: [0mAP for 13.0: 0.8122451901435852
[32m[11/05 16:17:37 detectron2]: [0mAP for 14.0: 0.7807108163833618
[32m[11/05 16:17:37 detectron2]: [0mAP for 15.0: 0.42341139912605286
[32m[11/05 16:17:37 detectron2]: [0mAP for 16.0: 0.5794138312339783
[32m[11/05 16:17:37 detectron2]: [0mAP for 17.0: 0.5766554474830627
[32m[11/05 16:17:37 detectron2]: [0mAP for 18.0: 0.8170406818389893
[32m[11/05 16:17:37 detectron2]: [0mAP for 19.0: 0.6788268685340881
[32m[11/05 16:17:37 detectron2]: [0mmAP: 0.6728681325912476
Hello Deepak,
I am unable to replicate the multi-GPU evaluation issue you are facing. I verified that evaluation runs on all GPUs, both by observing the GPU usage in nvidia-smi and from the logs:
[11/22 01:07:31 d2.evaluation.evaluator]: Total inference time: 0:01:12.698808 (0.118402 s / img per device, on 8 devices)
[11/22 01:07:31 d2.evaluation.evaluator]: Total inference pure compute time: 0:01:10 (0.114994 s / img per device, on 8 devices)
If you are still facing this issue, please make sure you are on the most recent commit and share the exact command you used along with a screenshot of nvidia-smi.
Regarding the meaning of the log outputs, please have a look at the WIC curve in Figure 5 of the paper. Each average precision (AP) value corresponds to a specific class's AP at a specific wilderness level on the x-axis of Figure 5. These values are later used in the formula given in the paper to calculate the wilderness impact. Also, please make sure you are on the latest commit for the improved log output; your previous log output corresponds to a stale commit. I hope this helps.
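To make that structure concrete, here is a minimal sketch (not the code used in this repository) of how the repeated per-class AP blocks printed above could be grouped by wilderness level and averaged into the mAP value that follows each block. The log file name, the regular expression, and the grouping-into-blocks-of-20 logic are assumptions for illustration only; the actual wilderness-impact computation is the formula given in the paper.

import re
from collections import defaultdict

# Hypothetical path to a file containing the log output shown above.
LOG_FILE = "eval.log"

# Matches lines like "AP for 14.0: 0.788679838180542" from the
# per-wilderness-level blocks (class ids printed as "k.0").
ap_line = re.compile(r"AP for (\d+)\.0: ([0-9.]+)")

# Assumption: consecutive blocks of 20 classes correspond to
# successive wilderness levels (0, 1, 2, ...).
levels = defaultdict(dict)
level = 0
with open(LOG_FILE) as f:
    for line in f:
        match = ap_line.search(line)
        if match is None:
            continue
        levels[level][int(match.group(1))] = float(match.group(2))
        if len(levels[level]) == 20:  # all 20 VOC classes seen, move to next level
            level += 1

for lvl, aps in sorted(levels.items()):
    print(f"wilderness level {lvl}: mAP = {sum(aps.values()) / len(aps):.4f}")

Averaging the first block above (the one starting with AP for 0.0: 0.8084...) reproduces the logged mAP of 0.7348..., and the later blocks give the progressively lower mAP values as more unknowns are mixed in.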
Hello Akshay, thank you for the prompt response. I noticed that my previous issue submission was on a stale commit. I updated my cloned repo to the latest commit and tried to reproduce the Faster R-CNN result. Table 1 of your paper shows that for Faster R-CNN we should get 81.86% mAP on the PASCAL test set and 77.09% on WR1. I tried the following command for training:
python main.py --num-gpus 4 --config-file training_configs/faster_rcnn_R_50_FPN.yaml
During testing it throws the error below. Note that this run is on 4 GPUs. Error log:
==========================================
SLURM_JOB_ID = 265254
SLURM_NODELIST = gnode02
SLURM_JOB_GPUS = 0,1,2,3
==========================================
Command Line Args: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=[], resume=False)
[11/26 15:46:20 detectron2]: Rank of current process: 0. World size: 4
[11/26 15:46:22 detectron2]: Environment info:
---------------------- -------------------------------------------------------------------------------
sys.platform linux
Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 21:52:21) [GCC 7.3.0]
numpy 1.16.4
detectron2 0.3 @/home/dksingh/inseg/detectron2/detectron2
Compiler GCC 5.5
CUDA compiler CUDA 10.2
detectron2 arch flags 6.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.6.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch
PyTorch debug build False
GPU available True
GPU 0,1,2,3 GeForce GTX 1080 Ti (arch=6.1)
CUDA_HOME /usr/local/cuda-10.2
Pillow 7.1.2
torchvision 0.7.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torchvision
torchvision arch flags 3.5, 5.0, 6.0, 7.0, 7.5
fvcore 0.1.2.post20201103
cv2 4.1.0
---------------------- -------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
[11/26 15:46:22 detectron2]: Command line arguments: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=[], resume=False)
[11/26 15:46:22 detectron2]: Contents of args.config_file=training_configs/faster_rcnn_R_50_FPN.yaml:
# Configuration for training with 4 gpus
_BASE_: "~/detectron2/detectron2/configs/Base-RCNN-FPN.yaml"
MODEL:
WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
MASK_ON: False
RESNETS:
DEPTH: 50
ROI_HEADS:
NUM_CLASSES: 20
DATASETS:
TRAIN: ('custom_voc_2007_train','custom_voc_2007_val','custom_voc_2012_train','custom_voc_2012_val',)
TEST: ('custom_voc_2007_test','WR1_Mixed_Unknowns')
# TEST: ('custom_voc_2007_test','Mixed_Unknowns')
SOLVER:
BASE_LR: 0.01
STEPS: (24000, 32000)
MAX_ITER: 36000
WARMUP_ITERS: 100
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
[11/26 15:46:22 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
ASPECT_RATIO_GROUPING: True
FILTER_EMPTY_ANNOTATIONS: True
NUM_WORKERS: 4
REPEAT_THRESHOLD: 0.0
SAMPLER_TRAIN: TrainingSampler
DATASETS:
PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
PROPOSAL_FILES_TEST: ()
PROPOSAL_FILES_TRAIN: ()
TEST: ('custom_voc_2007_test', 'WR1_Mixed_Unknowns')
TRAIN: ('custom_voc_2007_train', 'custom_voc_2007_val', 'custom_voc_2012_train', 'custom_voc_2012_val')
GLOBAL:
HACK: 1.0
INPUT:
CROP:
ENABLED: False
SIZE: [0.9, 0.9]
TYPE: relative_range
FORMAT: BGR
MASK_FORMAT: polygon
MAX_SIZE_TEST: 1333
MAX_SIZE_TRAIN: 1333
MIN_SIZE_TEST: 800
MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
MIN_SIZE_TRAIN_SAMPLING: choice
RANDOM_FLIP: horizontal
MODEL:
ANCHOR_GENERATOR:
ANGLES: [[-90, 0, 90]]
ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
NAME: DefaultAnchorGenerator
OFFSET: 0.0
SIZES: [[32], [64], [128], [256], [512]]
BACKBONE:
FREEZE_AT: 2
NAME: build_resnet_fpn_backbone
DEVICE: cuda
FPN:
FUSE_TYPE: sum
IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
NORM:
OUT_CHANNELS: 256
KEYPOINT_ON: False
LOAD_PROPOSALS: False
MASK_ON: False
META_ARCHITECTURE: GeneralizedRCNN
PANOPTIC_FPN:
COMBINE:
ENABLED: True
INSTANCES_CONFIDENCE_THRESH: 0.5
OVERLAP_THRESH: 0.5
STUFF_AREA_LIMIT: 4096
INSTANCE_LOSS_WEIGHT: 1.0
PIXEL_MEAN: [103.53, 116.28, 123.675]
PIXEL_STD: [1.0, 1.0, 1.0]
PROPOSAL_GENERATOR:
MIN_SIZE: 0
NAME: RPN
RESNETS:
DEFORM_MODULATED: False
DEFORM_NUM_GROUPS: 1
DEFORM_ON_PER_STAGE: [False, False, False, False]
DEPTH: 50
NORM: FrozenBN
NUM_GROUPS: 1
OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
RES2_OUT_CHANNELS: 256
RES5_DILATION: 1
STEM_OUT_CHANNELS: 64
STRIDE_IN_1X1: True
WIDTH_PER_GROUP: 64
RETINANET:
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
FOCAL_LOSS_ALPHA: 0.25
FOCAL_LOSS_GAMMA: 2.0
IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
IOU_LABELS: [0, -1, 1]
IOU_THRESHOLDS: [0.4, 0.5]
NMS_THRESH_TEST: 0.5
NORM:
NUM_CLASSES: 80
NUM_CONVS: 4
PRIOR_PROB: 0.01
SCORE_THRESH_TEST: 0.05
SMOOTH_L1_LOSS_BETA: 0.1
TOPK_CANDIDATES_TEST: 1000
ROI_BOX_CASCADE_HEAD:
BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
IOUS: (0.5, 0.6, 0.7)
ROI_BOX_HEAD:
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_LOSS_WEIGHT: 1.0
BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
CLS_AGNOSTIC_BBOX_REG: False
CONV_DIM: 256
FC_DIM: 1024
NAME: FastRCNNConvFCHead
NORM:
NUM_CONV: 0
NUM_FC: 2
POOLER_RESOLUTION: 7
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
SMOOTH_L1_BETA: 0.0
TRAIN_ON_PRED_BOXES: False
ROI_HEADS:
BATCH_SIZE_PER_IMAGE: 512
IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
IOU_LABELS: [0, 1]
IOU_THRESHOLDS: [0.5]
NAME: StandardROIHeads
NMS_THRESH_TEST: 0.5
NUM_CLASSES: 20
POSITIVE_FRACTION: 0.25
PROPOSAL_APPEND_GT: True
SCORE_THRESH_TEST: 0.05
ROI_KEYPOINT_HEAD:
CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
LOSS_WEIGHT: 1.0
MIN_KEYPOINTS_PER_IMAGE: 1
NAME: KRCNNConvDeconvUpsampleHead
NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
NUM_KEYPOINTS: 17
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
ROI_MASK_HEAD:
CLS_AGNOSTIC_MASK: False
CONV_DIM: 256
NAME: MaskRCNNConvUpsampleHead
NORM:
NUM_CONV: 4
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
RPN:
BATCH_SIZE_PER_IMAGE: 256
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_LOSS_WEIGHT: 1.0
BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
BOUNDARY_THRESH: -1
HEAD_NAME: StandardRPNHead
IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6']
IOU_LABELS: [0, -1, 1]
IOU_THRESHOLDS: [0.3, 0.7]
LOSS_WEIGHT: 1.0
NMS_THRESH: 0.7
POSITIVE_FRACTION: 0.5
POST_NMS_TOPK_TEST: 1000
POST_NMS_TOPK_TRAIN: 1000
PRE_NMS_TOPK_TEST: 1000
PRE_NMS_TOPK_TRAIN: 2000
SMOOTH_L1_BETA: 0.0
SEM_SEG_HEAD:
COMMON_STRIDE: 4
CONVS_DIM: 128
IGNORE_VALUE: 255
IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
LOSS_WEIGHT: 1.0
NAME: SemSegFPNHead
NORM: GN
NUM_CLASSES: 54
WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
SEED: -1
SOLVER:
AMP:
ENABLED: False
BASE_LR: 0.01
BIAS_LR_FACTOR: 1.0
CHECKPOINT_PERIOD: 5000
CLIP_GRADIENTS:
CLIP_TYPE: value
CLIP_VALUE: 1.0
ENABLED: False
NORM_TYPE: 2.0
GAMMA: 0.1
IMS_PER_BATCH: 16
LR_SCHEDULER_NAME: WarmupMultiStepLR
MAX_ITER: 36000
MOMENTUM: 0.9
NESTEROV: False
REFERENCE_WORLD_SIZE: 0
STEPS: (24000, 32000)
WARMUP_FACTOR: 0.001
WARMUP_ITERS: 100
WARMUP_METHOD: linear
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0001
WEIGHT_DECAY_NORM: 0.0
TEST:
AUG:
ENABLED: False
FLIP: True
MAX_SIZE: 4000
MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
DETECTIONS_PER_IMAGE: 100
EVAL_PERIOD: 0
EXPECTED_RESULTS: []
KEYPOINT_OKS_SIGMAS: []
PRECISE_BN:
ENABLED: False
NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0
[11/26 15:46:22 detectron2]: Full config saved to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/config.yaml
[11/26 15:46:22 d2.utils.env]: Using a generated random seed 22657084
[11/26 15:46:23 detectron2]: Model:
GeneralizedRCNN(
(backbone): FPN(
(fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(top_block): LastLevelMaxPool()
(bottom_up): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
)
(proposal_generator): RPN(
(rpn_head): StandardRPNHead(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
)
(roi_heads): StandardROIHeads(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(box_head): FastRCNNConvFCHead(
(flatten): Flatten()
(fc1): Linear(in_features=12544, out_features=1024, bias=True)
(fc_relu1): ReLU()
(fc2): Linear(in_features=1024, out_features=1024, bias=True)
(fc_relu2): ReLU()
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=1024, out_features=21, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=80, bias=True)
)
)
)
[11/26 15:46:23 fvcore.common.checkpoint]: Loading checkpoint from detectron2://ImageNetPretrained/MSRA/R-50.pkl
[11/26 15:46:24 fvcore.common.file_io]: URL https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl cached in /home/dksingh/.torch/fvcore_cache/detectron2/ImageNetPretrained/MSRA/R-50.pkl
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: Remapping C2 weights ......
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv1.norm.bias loaded from res2_0_branch2a_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv1.norm.running_mean loaded from res2_0_branch2a_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv1.norm.running_var loaded from res2_0_branch2a_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv1.norm.weight loaded from res2_0_branch2a_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv1.weight loaded from res2_0_branch2a_w of shape (64, 64, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv2.norm.bias loaded from res2_0_branch2b_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv2.norm.running_mean loaded from res2_0_branch2b_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv2.norm.running_var loaded from res2_0_branch2b_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv2.norm.weight loaded from res2_0_branch2b_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv2.weight loaded from res2_0_branch2b_w of shape (64, 64, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv3.norm.bias loaded from res2_0_branch2c_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv3.norm.running_mean loaded from res2_0_branch2c_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv3.norm.running_var loaded from res2_0_branch2c_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv3.norm.weight loaded from res2_0_branch2c_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.conv3.weight loaded from res2_0_branch2c_w of shape (256, 64, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.shortcut.norm.bias loaded from res2_0_branch1_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.shortcut.norm.running_mean loaded from res2_0_branch1_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.shortcut.norm.running_var loaded from res2_0_branch1_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.shortcut.norm.weight loaded from res2_0_branch1_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.0.shortcut.weight loaded from res2_0_branch1_w of shape (256, 64, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv1.norm.bias loaded from res2_1_branch2a_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv1.norm.running_mean loaded from res2_1_branch2a_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv1.norm.running_var loaded from res2_1_branch2a_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv1.norm.weight loaded from res2_1_branch2a_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv1.weight loaded from res2_1_branch2a_w of shape (64, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv2.norm.bias loaded from res2_1_branch2b_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv2.norm.running_mean loaded from res2_1_branch2b_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv2.norm.running_var loaded from res2_1_branch2b_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv2.norm.weight loaded from res2_1_branch2b_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv2.weight loaded from res2_1_branch2b_w of shape (64, 64, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv3.norm.bias loaded from res2_1_branch2c_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv3.norm.running_mean loaded from res2_1_branch2c_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv3.norm.running_var loaded from res2_1_branch2c_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv3.norm.weight loaded from res2_1_branch2c_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.1.conv3.weight loaded from res2_1_branch2c_w of shape (256, 64, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv1.norm.bias loaded from res2_2_branch2a_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv1.norm.running_mean loaded from res2_2_branch2a_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv1.norm.running_var loaded from res2_2_branch2a_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv1.norm.weight loaded from res2_2_branch2a_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv1.weight loaded from res2_2_branch2a_w of shape (64, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv2.norm.bias loaded from res2_2_branch2b_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv2.norm.running_mean loaded from res2_2_branch2b_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv2.norm.running_var loaded from res2_2_branch2b_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv2.norm.weight loaded from res2_2_branch2b_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv2.weight loaded from res2_2_branch2b_w of shape (64, 64, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv3.norm.bias loaded from res2_2_branch2c_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv3.norm.running_mean loaded from res2_2_branch2c_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv3.norm.running_var loaded from res2_2_branch2c_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv3.norm.weight loaded from res2_2_branch2c_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res2.2.conv3.weight loaded from res2_2_branch2c_w of shape (256, 64, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv1.norm.bias loaded from res3_0_branch2a_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv1.norm.running_mean loaded from res3_0_branch2a_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv1.norm.running_var loaded from res3_0_branch2a_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv1.norm.weight loaded from res3_0_branch2a_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv1.weight loaded from res3_0_branch2a_w of shape (128, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv2.norm.bias loaded from res3_0_branch2b_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv2.norm.running_mean loaded from res3_0_branch2b_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv2.norm.running_var loaded from res3_0_branch2b_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv2.norm.weight loaded from res3_0_branch2b_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv2.weight loaded from res3_0_branch2b_w of shape (128, 128, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv3.norm.bias loaded from res3_0_branch2c_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv3.norm.running_mean loaded from res3_0_branch2c_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv3.norm.running_var loaded from res3_0_branch2c_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv3.norm.weight loaded from res3_0_branch2c_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.conv3.weight loaded from res3_0_branch2c_w of shape (512, 128, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.shortcut.norm.bias loaded from res3_0_branch1_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.shortcut.norm.running_mean loaded from res3_0_branch1_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.shortcut.norm.running_var loaded from res3_0_branch1_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.shortcut.norm.weight loaded from res3_0_branch1_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.0.shortcut.weight loaded from res3_0_branch1_w of shape (512, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv1.norm.bias loaded from res3_1_branch2a_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv1.norm.running_mean loaded from res3_1_branch2a_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv1.norm.running_var loaded from res3_1_branch2a_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv1.norm.weight loaded from res3_1_branch2a_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv1.weight loaded from res3_1_branch2a_w of shape (128, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv2.norm.bias loaded from res3_1_branch2b_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv2.norm.running_mean loaded from res3_1_branch2b_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv2.norm.running_var loaded from res3_1_branch2b_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv2.norm.weight loaded from res3_1_branch2b_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv2.weight loaded from res3_1_branch2b_w of shape (128, 128, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv3.norm.bias loaded from res3_1_branch2c_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv3.norm.running_mean loaded from res3_1_branch2c_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv3.norm.running_var loaded from res3_1_branch2c_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv3.norm.weight loaded from res3_1_branch2c_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.1.conv3.weight loaded from res3_1_branch2c_w of shape (512, 128, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv1.norm.bias loaded from res3_2_branch2a_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv1.norm.running_mean loaded from res3_2_branch2a_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv1.norm.running_var loaded from res3_2_branch2a_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv1.norm.weight loaded from res3_2_branch2a_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv1.weight loaded from res3_2_branch2a_w of shape (128, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv2.norm.bias loaded from res3_2_branch2b_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv2.norm.running_mean loaded from res3_2_branch2b_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv2.norm.running_var loaded from res3_2_branch2b_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv2.norm.weight loaded from res3_2_branch2b_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv2.weight loaded from res3_2_branch2b_w of shape (128, 128, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv3.norm.bias loaded from res3_2_branch2c_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv3.norm.running_mean loaded from res3_2_branch2c_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv3.norm.running_var loaded from res3_2_branch2c_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv3.norm.weight loaded from res3_2_branch2c_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.2.conv3.weight loaded from res3_2_branch2c_w of shape (512, 128, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv1.norm.bias loaded from res3_3_branch2a_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv1.norm.running_mean loaded from res3_3_branch2a_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv1.norm.running_var loaded from res3_3_branch2a_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv1.norm.weight loaded from res3_3_branch2a_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv1.weight loaded from res3_3_branch2a_w of shape (128, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv2.norm.bias loaded from res3_3_branch2b_bn_beta of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv2.norm.running_mean loaded from res3_3_branch2b_bn_running_mean of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv2.norm.running_var loaded from res3_3_branch2b_bn_running_var of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv2.norm.weight loaded from res3_3_branch2b_bn_gamma of shape (128,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv2.weight loaded from res3_3_branch2b_w of shape (128, 128, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv3.norm.bias loaded from res3_3_branch2c_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv3.norm.running_mean loaded from res3_3_branch2c_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv3.norm.running_var loaded from res3_3_branch2c_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv3.norm.weight loaded from res3_3_branch2c_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res3.3.conv3.weight loaded from res3_3_branch2c_w of shape (512, 128, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv1.norm.bias loaded from res4_0_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv1.norm.running_mean loaded from res4_0_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv1.norm.running_var loaded from res4_0_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv1.norm.weight loaded from res4_0_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv1.weight loaded from res4_0_branch2a_w of shape (256, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv2.norm.bias loaded from res4_0_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv2.norm.running_mean loaded from res4_0_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv2.norm.running_var loaded from res4_0_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv2.norm.weight loaded from res4_0_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv2.weight loaded from res4_0_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv3.norm.bias loaded from res4_0_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv3.norm.running_mean loaded from res4_0_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv3.norm.running_var loaded from res4_0_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv3.norm.weight loaded from res4_0_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.conv3.weight loaded from res4_0_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.shortcut.norm.bias loaded from res4_0_branch1_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.shortcut.norm.running_mean loaded from res4_0_branch1_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.shortcut.norm.running_var loaded from res4_0_branch1_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.shortcut.norm.weight loaded from res4_0_branch1_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.0.shortcut.weight loaded from res4_0_branch1_w of shape (1024, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv1.norm.bias loaded from res4_1_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv1.norm.running_mean loaded from res4_1_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv1.norm.running_var loaded from res4_1_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv1.norm.weight loaded from res4_1_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv1.weight loaded from res4_1_branch2a_w of shape (256, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv2.norm.bias loaded from res4_1_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv2.norm.running_mean loaded from res4_1_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv2.norm.running_var loaded from res4_1_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv2.norm.weight loaded from res4_1_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv2.weight loaded from res4_1_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv3.norm.bias loaded from res4_1_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv3.norm.running_mean loaded from res4_1_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv3.norm.running_var loaded from res4_1_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv3.norm.weight loaded from res4_1_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.1.conv3.weight loaded from res4_1_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv1.norm.bias loaded from res4_2_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv1.norm.running_mean loaded from res4_2_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv1.norm.running_var loaded from res4_2_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv1.norm.weight loaded from res4_2_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv1.weight loaded from res4_2_branch2a_w of shape (256, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv2.norm.bias loaded from res4_2_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv2.norm.running_mean loaded from res4_2_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv2.norm.running_var loaded from res4_2_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv2.norm.weight loaded from res4_2_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv2.weight loaded from res4_2_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv3.norm.bias loaded from res4_2_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv3.norm.running_mean loaded from res4_2_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv3.norm.running_var loaded from res4_2_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv3.norm.weight loaded from res4_2_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.2.conv3.weight loaded from res4_2_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv1.norm.bias loaded from res4_3_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv1.norm.running_mean loaded from res4_3_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv1.norm.running_var loaded from res4_3_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv1.norm.weight loaded from res4_3_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv1.weight loaded from res4_3_branch2a_w of shape (256, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv2.norm.bias loaded from res4_3_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv2.norm.running_mean loaded from res4_3_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv2.norm.running_var loaded from res4_3_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv2.norm.weight loaded from res4_3_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv2.weight loaded from res4_3_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv3.norm.bias loaded from res4_3_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv3.norm.running_mean loaded from res4_3_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv3.norm.running_var loaded from res4_3_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv3.norm.weight loaded from res4_3_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.3.conv3.weight loaded from res4_3_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv1.norm.bias loaded from res4_4_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv1.norm.running_mean loaded from res4_4_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv1.norm.running_var loaded from res4_4_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv1.norm.weight loaded from res4_4_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv1.weight loaded from res4_4_branch2a_w of shape (256, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv2.norm.bias loaded from res4_4_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv2.norm.running_mean loaded from res4_4_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv2.norm.running_var loaded from res4_4_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv2.norm.weight loaded from res4_4_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv2.weight loaded from res4_4_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv3.norm.bias loaded from res4_4_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv3.norm.running_mean loaded from res4_4_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv3.norm.running_var loaded from res4_4_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv3.norm.weight loaded from res4_4_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.4.conv3.weight loaded from res4_4_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv1.norm.bias loaded from res4_5_branch2a_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv1.norm.running_mean loaded from res4_5_branch2a_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv1.norm.running_var loaded from res4_5_branch2a_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv1.norm.weight loaded from res4_5_branch2a_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv1.weight loaded from res4_5_branch2a_w of shape (256, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv2.norm.bias loaded from res4_5_branch2b_bn_beta of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv2.norm.running_mean loaded from res4_5_branch2b_bn_running_mean of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv2.norm.running_var loaded from res4_5_branch2b_bn_running_var of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv2.norm.weight loaded from res4_5_branch2b_bn_gamma of shape (256,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv2.weight loaded from res4_5_branch2b_w of shape (256, 256, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv3.norm.bias loaded from res4_5_branch2c_bn_beta of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv3.norm.running_mean loaded from res4_5_branch2c_bn_running_mean of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv3.norm.running_var loaded from res4_5_branch2c_bn_running_var of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv3.norm.weight loaded from res4_5_branch2c_bn_gamma of shape (1024,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res4.5.conv3.weight loaded from res4_5_branch2c_w of shape (1024, 256, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv1.norm.bias loaded from res5_0_branch2a_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv1.norm.running_mean loaded from res5_0_branch2a_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv1.norm.running_var loaded from res5_0_branch2a_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv1.norm.weight loaded from res5_0_branch2a_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv1.weight loaded from res5_0_branch2a_w of shape (512, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv2.norm.bias loaded from res5_0_branch2b_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv2.norm.running_mean loaded from res5_0_branch2b_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv2.norm.running_var loaded from res5_0_branch2b_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv2.norm.weight loaded from res5_0_branch2b_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv2.weight loaded from res5_0_branch2b_w of shape (512, 512, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv3.norm.bias loaded from res5_0_branch2c_bn_beta of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv3.norm.running_mean loaded from res5_0_branch2c_bn_running_mean of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv3.norm.running_var loaded from res5_0_branch2c_bn_running_var of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv3.norm.weight loaded from res5_0_branch2c_bn_gamma of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.conv3.weight loaded from res5_0_branch2c_w of shape (2048, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.shortcut.norm.bias loaded from res5_0_branch1_bn_beta of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.shortcut.norm.running_mean loaded from res5_0_branch1_bn_running_mean of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.shortcut.norm.running_var loaded from res5_0_branch1_bn_running_var of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.shortcut.norm.weight loaded from res5_0_branch1_bn_gamma of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.0.shortcut.weight loaded from res5_0_branch1_w of shape (2048, 1024, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv1.norm.bias loaded from res5_1_branch2a_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv1.norm.running_mean loaded from res5_1_branch2a_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv1.norm.running_var loaded from res5_1_branch2a_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv1.norm.weight loaded from res5_1_branch2a_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv1.weight loaded from res5_1_branch2a_w of shape (512, 2048, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv2.norm.bias loaded from res5_1_branch2b_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv2.norm.running_mean loaded from res5_1_branch2b_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv2.norm.running_var loaded from res5_1_branch2b_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv2.norm.weight loaded from res5_1_branch2b_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv2.weight loaded from res5_1_branch2b_w of shape (512, 512, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv3.norm.bias loaded from res5_1_branch2c_bn_beta of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv3.norm.running_mean loaded from res5_1_branch2c_bn_running_mean of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv3.norm.running_var loaded from res5_1_branch2c_bn_running_var of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv3.norm.weight loaded from res5_1_branch2c_bn_gamma of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.1.conv3.weight loaded from res5_1_branch2c_w of shape (2048, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv1.norm.bias loaded from res5_2_branch2a_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv1.norm.running_mean loaded from res5_2_branch2a_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv1.norm.running_var loaded from res5_2_branch2a_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv1.norm.weight loaded from res5_2_branch2a_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv1.weight loaded from res5_2_branch2a_w of shape (512, 2048, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv2.norm.bias loaded from res5_2_branch2b_bn_beta of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv2.norm.running_mean loaded from res5_2_branch2b_bn_running_mean of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv2.norm.running_var loaded from res5_2_branch2b_bn_running_var of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv2.norm.weight loaded from res5_2_branch2b_bn_gamma of shape (512,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv2.weight loaded from res5_2_branch2b_w of shape (512, 512, 3, 3)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv3.norm.bias loaded from res5_2_branch2c_bn_beta of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv3.norm.running_mean loaded from res5_2_branch2c_bn_running_mean of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv3.norm.running_var loaded from res5_2_branch2c_bn_running_var of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv3.norm.weight loaded from res5_2_branch2c_bn_gamma of shape (2048,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.res5.2.conv3.weight loaded from res5_2_branch2c_w of shape (2048, 512, 1, 1)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.stem.conv1.norm.bias loaded from res_conv1_bn_beta of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.stem.conv1.norm.running_mean loaded from res_conv1_bn_running_mean of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.stem.conv1.norm.running_var loaded from res_conv1_bn_running_var of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.stem.conv1.norm.weight loaded from res_conv1_bn_gamma of shape (64,)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mbackbone.bottom_up.stem.conv1.weight loaded from conv1_w of shape (64, 3, 7, 7)
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mSome model parameters or buffers are not found in the checkpoint:
backbone.fpn_lateral2.{bias, weight}
backbone.fpn_lateral3.{bias, weight}
backbone.fpn_lateral4.{bias, weight}
backbone.fpn_lateral5.{bias, weight}
backbone.fpn_output2.{bias, weight}
backbone.fpn_output3.{bias, weight}
backbone.fpn_output4.{bias, weight}
backbone.fpn_output5.{bias, weight}
pixel_mean
pixel_std
proposal_generator.anchor_generator.cell_anchors.{0, 1, 2, 3, 4}
proposal_generator.rpn_head.anchor_deltas.{bias, weight}
proposal_generator.rpn_head.conv.{bias, weight}
proposal_generator.rpn_head.objectness_logits.{bias, weight}
roi_heads.box_head.fc1.{bias, weight}
roi_heads.box_head.fc2.{bias, weight}
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}
[32m[11/26 15:46:24 d2.checkpoint.c2_model_loading]: [0mThe checkpoint state_dict contains keys that are not used by the model:
fc1000_b
fc1000_w
conv1_b
[32m[11/26 15:46:25 d2.data.datasets.coco]: [0mLoaded 2501 images in COCO format from protocol/custom_protocols/custom_voc_2007_train.json
[32m[11/26 15:46:26 d2.data.datasets.coco]: [0mLoaded 2510 images in COCO format from protocol/custom_protocols/custom_voc_2007_val.json
[32m[11/26 15:46:26 d2.data.datasets.coco]: [0mLoaded 5717 images in COCO format from protocol/custom_protocols/custom_voc_2012_train.json
[32m[11/26 15:46:26 d2.data.datasets.coco]: [0mLoaded 5823 images in COCO format from protocol/custom_protocols/custom_voc_2012_val.json
[32m[11/26 15:46:26 d2.data.build]: [0mRemoved 0 images with no usable annotations. 16551 images left.
[32m[11/26 15:46:26 d2.data.build]: [0mDistribution of instances among all 20 categories:
| category    | #instances   | category    | #instances   | category   | #instances   |
|:-----------:|:-------------|:-----------:|:-------------|:----------:|:-------------|
| aeroplane | 1285 | bicycle | 1208 | bird | 1820 |
| boat | 1397 | bottle | 2116 | bus | 909 |
| car | 4008 | cat | 1616 | chair | 4338 |
| cow | 1058 | diningtable | 1057 | dog | 2079 |
| horse | 1156 | motorbike | 1141 | person | 15576 |
| pottedplant | 1724 | sheep | 1347 | sofa | 1211 |
| train | 984 | tvmonitor | 1193 | | |
| total       | 47223        |             |              |            |              |
[32m[11/26 15:46:26 d2.data.dataset_mapper]: [0m[DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[32m[11/26 15:46:26 d2.data.build]: [0mUsing training sampler TrainingSampler
[32m[11/26 15:46:26 d2.data.common]: [0mSerializing 16551 elements to byte tensors and concatenating them all ...
[32m[11/26 15:46:27 d2.data.common]: [0mSerialized dataset takes 6.17 MiB
[32m[11/26 15:46:27 detectron2]: [0mStarting training from iteration 0
[32m[11/26 15:46:45 d2.utils.events]: [0m iter: 20 total_loss: 0.7846 loss_box_reg: 0.02849 loss_cls: 0.2678 loss_rpn_cls: 0.5485 loss_rpn_loc: 0.04709 lr: 0.0019081 max_mem: 5120M
[32m[11/26 22:08:10 d2.utils.events]: [0m eta: 0:00:11 iter: 35980 total_loss: 0.1699 loss_box_reg: 0.1006 loss_cls: 0.05196 loss_rpn_cls: 0.002539 loss_rpn_loc: 0.01681 lr: 0.0001 max_mem: 5120M
[32m[11/26 22:08:22 fvcore.common.checkpoint]: [0mSaving checkpoint to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
[32m[11/26 22:08:24 fvcore.common.checkpoint]: [0mSaving checkpoint to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
num_fg = fg_inds.nonzero().numel()
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
num_fg = fg_inds.nonzero().numel()
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
num_fg = fg_inds.nonzero().numel()
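The repeated `UserWarning` above is harmless, but it can be silenced by switching to the signature the message suggests. A minimal sketch of that change (`fg_inds` below is just a stand-in boolean mask, not the real variable from `fast_rcnn.py`):

```python
import torch

# Stand-in boolean mask; in detectron2's fast_rcnn.py it marks foreground proposals.
fg_inds = torch.tensor([True, False, True, True])

# Deprecated form: fg_inds.nonzero().numel()
# Explicit keyword form suggested by the warning:
num_fg = fg_inds.nonzero(as_tuple=False).numel()
print(num_fg)  # 3
```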
WARNING [11/26 22:08:26 d2.data.datasets.coco]:
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[32m[11/26 22:08:26 d2.data.datasets.coco]: [0mLoaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[32m[11/26 22:08:26 d2.data.build]: [0mDistribution of instances among all 21 categories:
| category   | #instances   | category    | #instances   | category    | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown | 0 | aeroplane | 311 | bicycle | 389 |
| bird | 576 | boat | 393 | bottle | 657 |
| bus | 254 | car | 1541 | cat | 370 |
| chair | 1374 | cow | 329 | diningtable | 299 |
| dog | 530 | horse | 395 | motorbike | 369 |
| person | 5227 | pottedplant | 592 | sheep | 311 |
| sofa | 396 | train | 302 | tvmonitor | 361 |
| | | | | | |
| total      | 14976        |             |              |             |              |
[32m[11/26 22:08:26 d2.data.dataset_mapper]: [0m[DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[32m[11/26 22:08:26 d2.data.common]: [0mSerializing 4952 elements to byte tensors and concatenating them all ...
[32m[11/26 22:08:26 d2.data.common]: [0mSerialized dataset takes 1.87 MiB
[32m[11/26 22:08:26 d2.evaluation.evaluator]: [0mStart inference on 1238 images
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
num_fg = fg_inds.nonzero().numel()
[32m[11/26 22:08:33 d2.evaluation.evaluator]: [0mInference done 11/1238. 0.0654 s / img. ETA=0:01:22
[32m[11/26 22:08:38 d2.evaluation.evaluator]: [0mInference done 86/1238. 0.0650 s / img. ETA=0:01:17
[32m[11/26 22:08:43 d2.evaluation.evaluator]: [0mInference done 162/1238. 0.0644 s / img. ETA=0:01:11
[32m[11/26 22:08:48 d2.evaluation.evaluator]: [0mInference done 238/1238. 0.0642 s / img. ETA=0:01:06
[32m[11/26 22:08:53 d2.evaluation.evaluator]: [0mInference done 315/1238. 0.0640 s / img. ETA=0:01:01
[32m[11/26 22:08:58 d2.evaluation.evaluator]: [0mInference done 390/1238. 0.0641 s / img. ETA=0:00:56
[32m[11/26 22:09:03 d2.evaluation.evaluator]: [0mInference done 466/1238. 0.0641 s / img. ETA=0:00:51
[32m[11/26 22:09:08 d2.evaluation.evaluator]: [0mInference done 543/1238. 0.0640 s / img. ETA=0:00:46
[32m[11/26 22:09:13 d2.evaluation.evaluator]: [0mInference done 619/1238. 0.0640 s / img. ETA=0:00:40
[32m[11/26 22:09:18 d2.evaluation.evaluator]: [0mInference done 695/1238. 0.0640 s / img. ETA=0:00:35
[32m[11/26 22:09:23 d2.evaluation.evaluator]: [0mInference done 771/1238. 0.0640 s / img. ETA=0:00:30
[32m[11/26 22:09:28 d2.evaluation.evaluator]: [0mInference done 847/1238. 0.0640 s / img. ETA=0:00:25
[32m[11/26 22:09:33 d2.evaluation.evaluator]: [0mInference done 923/1238. 0.0640 s / img. ETA=0:00:20
[32m[11/26 22:09:38 d2.evaluation.evaluator]: [0mInference done 998/1238. 0.0641 s / img. ETA=0:00:15
[32m[11/26 22:09:43 d2.evaluation.evaluator]: [0mInference done 1074/1238. 0.0641 s / img. ETA=0:00:10
[32m[11/26 22:09:48 d2.evaluation.evaluator]: [0mInference done 1150/1238. 0.0641 s / img. ETA=0:00:05
[32m[11/26 22:09:53 d2.evaluation.evaluator]: [0mInference done 1226/1238. 0.0641 s / img. ETA=0:00:00
[32m[11/26 22:09:54 d2.evaluation.evaluator]: [0mTotal inference time: 0:01:22.036135 (0.066534 s / img per device, on 4 devices)
[32m[11/26 22:09:54 d2.evaluation.evaluator]: [0mTotal inference pure compute time: 0:01:19 (0.064101 s / img per device, on 4 devices)
[32m[11/26 22:10:05 detectron2]: [0mImage level evaluation complete for custom_voc_2007_test
[32m[11/26 22:10:05 detectron2]: [0mResults for custom_voc_2007_test
Traceback (most recent call last):
File "main.py", line 199, in <module>
args=(args,),
File "/home/dksingh/inseg/detectron2/detectron2/engine/launch.py", line 59, in launch
daemon=False,
File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/dksingh/inseg/detectron2/detectron2/engine/launch.py", line 94, in _distributed_worker
main_func(*args)
File "/home/dksingh/paper_impl2/Elephant-of-object-detection/main.py", line 187, in main
return do_test(cfg, model)
File "/home/dksingh/paper_impl2/Elephant-of-object-detection/main.py", line 63, in do_test
evaluator._coco_api.cats)
File "/home/dksingh/paper_impl2/Elephant-of-object-detection/WIC.py", line 45, in only_mAP_analysis
scores = torch.cat(scores)
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:3
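The crash happens in `only_mAP_analysis`, where `torch.cat(scores)` receives tensors that were gathered from different GPUs (cuda:0 and cuda:3). A minimal sketch of the kind of change that avoids this, assuming `scores` is the list of per-GPU tensors from the traceback (the helper name is made up for illustration):

```python
import torch

def cat_on_common_device(tensors, device="cpu"):
    """Move every tensor to one device before concatenating.

    torch.cat() refuses to mix devices, so score tensors gathered from
    several GPUs (cuda:0, cuda:3, ...) have to be moved first.
    """
    return torch.cat([t.to(device) for t in tensors])

# e.g. replace `scores = torch.cat(scores)` in WIC.py with:
# scores = cat_on_common_device(scores)
```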
So I performed the evaluation on a single GPU with the trained model, using the following command:
python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only
I think it produces 73.87 mAP on the PASCAL test set and 67.32 mAP on WR1. Test log:
Command Line Args: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=True, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=True)
[32m[11/26 22:12:53 detectron2]: [0mRank of current process: 0. World size: 1
[32m[11/26 22:12:58 detectron2]: [0mEnvironment info:
---------------------- -------------------------------------------------------------------------------
sys.platform linux
Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 21:52:21) [GCC 7.3.0]
numpy 1.16.4
detectron2 0.3 @/home/dksingh/inseg/detectron2/detectron2
Compiler GCC 5.5
CUDA compiler CUDA 10.2
detectron2 arch flags 6.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.6.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch
PyTorch debug build False
GPU available True
GPU 0,1,2,3 GeForce GTX 1080 Ti (arch=6.1)
CUDA_HOME /usr/local/cuda
Pillow 7.1.2
torchvision 0.7.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torchvision
torchvision arch flags 3.5, 5.0, 6.0, 7.0, 7.5
fvcore 0.1.2.post20201103
cv2 4.1.0
---------------------- -------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
[32m[11/26 22:12:58 detectron2]: [0mCommand line arguments: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=True, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=True)
[32m[11/26 22:12:58 detectron2]: [0mContents of args.config_file=training_configs/faster_rcnn_R_50_FPN.yaml:
# Configuration for training with 4 gpus
_BASE_: "~/detectron2/detectron2/configs/Base-RCNN-FPN.yaml"
MODEL:
WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
MASK_ON: False
RESNETS:
DEPTH: 50
ROI_HEADS:
NUM_CLASSES: 20
DATASETS:
TRAIN: ('custom_voc_2007_train','custom_voc_2007_val','custom_voc_2012_train','custom_voc_2012_val',)
TEST: ('custom_voc_2007_test','WR1_Mixed_Unknowns')
# TEST: ('custom_voc_2007_test','Mixed_Unknowns')
SOLVER:
BASE_LR: 0.01
STEPS: (24000, 32000)
MAX_ITER: 36000
WARMUP_ITERS: 100
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
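The dataset names in `DATASETS` are not built-in detectron2 datasets; the earlier log lines show they are loaded as COCO-format JSON from `protocol/custom_protocols/`. A rough sketch of how such a split is typically registered with detectron2's standard helper (the image root below is an assumption, not taken from the log):

```python
from detectron2.data.datasets import register_coco_instances

# Register one of the custom VOC splits named in DATASETS.TRAIN.
# The JSON path comes from the log above; the image root is a placeholder.
register_coco_instances(
    "custom_voc_2007_train",
    {},  # no extra metadata
    "protocol/custom_protocols/custom_voc_2007_train.json",
    "datasets/VOC2007/JPEGImages",  # hypothetical image directory
)
```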
[32m[11/26 22:12:58 detectron2]: [0mRunning with full config:
CUDNN_BENCHMARK: False
DATALOADER:
ASPECT_RATIO_GROUPING: True
FILTER_EMPTY_ANNOTATIONS: True
NUM_WORKERS: 4
REPEAT_THRESHOLD: 0.0
SAMPLER_TRAIN: TrainingSampler
DATASETS:
PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
PROPOSAL_FILES_TEST: ()
PROPOSAL_FILES_TRAIN: ()
TEST: ('custom_voc_2007_test', 'WR1_Mixed_Unknowns')
TRAIN: ('custom_voc_2007_train', 'custom_voc_2007_val', 'custom_voc_2012_train', 'custom_voc_2012_val')
GLOBAL:
HACK: 1.0
INPUT:
CROP:
ENABLED: False
SIZE: [0.9, 0.9]
TYPE: relative_range
FORMAT: BGR
MASK_FORMAT: polygon
MAX_SIZE_TEST: 1333
MAX_SIZE_TRAIN: 1333
MIN_SIZE_TEST: 800
MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
MIN_SIZE_TRAIN_SAMPLING: choice
RANDOM_FLIP: horizontal
MODEL:
ANCHOR_GENERATOR:
ANGLES: [[-90, 0, 90]]
ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
NAME: DefaultAnchorGenerator
OFFSET: 0.0
SIZES: [[32], [64], [128], [256], [512]]
BACKBONE:
FREEZE_AT: 2
NAME: build_resnet_fpn_backbone
DEVICE: cuda
FPN:
FUSE_TYPE: sum
IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
NORM:
OUT_CHANNELS: 256
KEYPOINT_ON: False
LOAD_PROPOSALS: False
MASK_ON: False
META_ARCHITECTURE: GeneralizedRCNN
PANOPTIC_FPN:
COMBINE:
ENABLED: True
INSTANCES_CONFIDENCE_THRESH: 0.5
OVERLAP_THRESH: 0.5
STUFF_AREA_LIMIT: 4096
INSTANCE_LOSS_WEIGHT: 1.0
PIXEL_MEAN: [103.53, 116.28, 123.675]
PIXEL_STD: [1.0, 1.0, 1.0]
PROPOSAL_GENERATOR:
MIN_SIZE: 0
NAME: RPN
RESNETS:
DEFORM_MODULATED: False
DEFORM_NUM_GROUPS: 1
DEFORM_ON_PER_STAGE: [False, False, False, False]
DEPTH: 50
NORM: FrozenBN
NUM_GROUPS: 1
OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
RES2_OUT_CHANNELS: 256
RES5_DILATION: 1
STEM_OUT_CHANNELS: 64
STRIDE_IN_1X1: True
WIDTH_PER_GROUP: 64
RETINANET:
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
FOCAL_LOSS_ALPHA: 0.25
FOCAL_LOSS_GAMMA: 2.0
IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
IOU_LABELS: [0, -1, 1]
IOU_THRESHOLDS: [0.4, 0.5]
NMS_THRESH_TEST: 0.5
NORM:
NUM_CLASSES: 80
NUM_CONVS: 4
PRIOR_PROB: 0.01
SCORE_THRESH_TEST: 0.05
SMOOTH_L1_LOSS_BETA: 0.1
TOPK_CANDIDATES_TEST: 1000
ROI_BOX_CASCADE_HEAD:
BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
IOUS: (0.5, 0.6, 0.7)
ROI_BOX_HEAD:
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_LOSS_WEIGHT: 1.0
BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
CLS_AGNOSTIC_BBOX_REG: False
CONV_DIM: 256
FC_DIM: 1024
NAME: FastRCNNConvFCHead
NORM:
NUM_CONV: 0
NUM_FC: 2
POOLER_RESOLUTION: 7
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
SMOOTH_L1_BETA: 0.0
TRAIN_ON_PRED_BOXES: False
ROI_HEADS:
BATCH_SIZE_PER_IMAGE: 512
IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
IOU_LABELS: [0, 1]
IOU_THRESHOLDS: [0.5]
NAME: StandardROIHeads
NMS_THRESH_TEST: 0.5
NUM_CLASSES: 20
POSITIVE_FRACTION: 0.25
PROPOSAL_APPEND_GT: True
SCORE_THRESH_TEST: 0.05
ROI_KEYPOINT_HEAD:
CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
LOSS_WEIGHT: 1.0
MIN_KEYPOINTS_PER_IMAGE: 1
NAME: KRCNNConvDeconvUpsampleHead
NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
NUM_KEYPOINTS: 17
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
ROI_MASK_HEAD:
CLS_AGNOSTIC_MASK: False
CONV_DIM: 256
NAME: MaskRCNNConvUpsampleHead
NORM:
NUM_CONV: 4
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_TYPE: ROIAlignV2
RPN:
BATCH_SIZE_PER_IMAGE: 256
BBOX_REG_LOSS_TYPE: smooth_l1
BBOX_REG_LOSS_WEIGHT: 1.0
BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
BOUNDARY_THRESH: -1
HEAD_NAME: StandardRPNHead
IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6']
IOU_LABELS: [0, -1, 1]
IOU_THRESHOLDS: [0.3, 0.7]
LOSS_WEIGHT: 1.0
NMS_THRESH: 0.7
POSITIVE_FRACTION: 0.5
POST_NMS_TOPK_TEST: 1000
POST_NMS_TOPK_TRAIN: 1000
PRE_NMS_TOPK_TEST: 1000
PRE_NMS_TOPK_TRAIN: 2000
SMOOTH_L1_BETA: 0.0
SEM_SEG_HEAD:
COMMON_STRIDE: 4
CONVS_DIM: 128
IGNORE_VALUE: 255
IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
LOSS_WEIGHT: 1.0
NAME: SemSegFPNHead
NORM: GN
NUM_CLASSES: 54
WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
SEED: -1
SOLVER:
AMP:
ENABLED: False
BASE_LR: 0.01
BIAS_LR_FACTOR: 1.0
CHECKPOINT_PERIOD: 5000
CLIP_GRADIENTS:
CLIP_TYPE: value
CLIP_VALUE: 1.0
ENABLED: False
NORM_TYPE: 2.0
GAMMA: 0.1
IMS_PER_BATCH: 16
LR_SCHEDULER_NAME: WarmupMultiStepLR
MAX_ITER: 36000
MOMENTUM: 0.9
NESTEROV: False
REFERENCE_WORLD_SIZE: 0
STEPS: (24000, 32000)
WARMUP_FACTOR: 0.001
WARMUP_ITERS: 100
WARMUP_METHOD: linear
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0001
WEIGHT_DECAY_NORM: 0.0
TEST:
AUG:
ENABLED: False
FLIP: True
MAX_SIZE: 4000
MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
DETECTIONS_PER_IMAGE: 100
EVAL_PERIOD: 0
EXPECTED_RESULTS: []
KEYPOINT_OKS_SIGMAS: []
PRECISE_BN:
ENABLED: False
NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0
[32m[11/26 22:12:58 detectron2]: [0mFull config saved to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/config.yaml
[32m[11/26 22:12:58 d2.utils.env]: [0mUsing a generated random seed 58088618
[32m[11/26 22:13:07 detectron2]: [0mModel:
GeneralizedRCNN(
(backbone): FPN(
(fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(top_block): LastLevelMaxPool()
(bottom_up): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
)
(proposal_generator): RPN(
(rpn_head): StandardRPNHead(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
)
(roi_heads): StandardROIHeads(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(box_head): FastRCNNConvFCHead(
(flatten): Flatten()
(fc1): Linear(in_features=12544, out_features=1024, bias=True)
(fc_relu1): ReLU()
(fc2): Linear(in_features=1024, out_features=1024, bias=True)
(fc_relu2): ReLU()
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=1024, out_features=21, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=80, bias=True)
)
)
)
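(Side note for anyone reproducing this run: the model summary above is what detectron2 builds from the dumped config, and it can be re-created outside the training script with the standard detectron2 API. A minimal sketch, assuming the config.yaml/model_final.pth paths from the log above and that the dumped config contains only standard detectron2 keys; this repo's own setup code may be required instead:)

```python
from detectron2.config import get_cfg
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model

# Paths taken from the log above; adjust to your own output directory.
out_dir = "/ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters"

cfg = get_cfg()
cfg.merge_from_file(f"{out_dir}/config.yaml")   # the full config dumped at startup

model = build_model(cfg)                        # GeneralizedRCNN, as printed above
DetectionCheckpointer(model).load(f"{out_dir}/model_final.pth")
model.eval()
```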
[11/26 22:13:07 fvcore.common.checkpoint]: Loading checkpoint from /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
WARNING [11/26 22:13:08 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[11/26 22:13:08 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[11/26 22:13:08 d2.data.build]: Distribution of instances among all 21 categories:
| category   | #instances   | category    | #instances   | category    | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown    | 0            | aeroplane   | 311          | bicycle     | 389          |
| bird       | 576          | boat        | 393          | bottle      | 657          |
| bus        | 254          | car         | 1541         | cat         | 370          |
| chair      | 1374         | cow         | 329          | diningtable | 299          |
| dog        | 530          | horse       | 395          | motorbike   | 369          |
| person     | 5227         | pottedplant | 592          | sheep       | 311          |
| sofa       | 396          | train       | 302          | tvmonitor   | 361          |
|            |              |             |              |             |              |
| total      | 14976        |             |              |             |              |
[11/26 22:13:08 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/26 22:13:08 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/26 22:13:08 d2.data.common]: Serialized dataset takes 1.87 MiB
[11/26 22:13:08 d2.evaluation.evaluator]: Start inference on 4952 images
[32m[11/26 22:13:10 d2.evaluation.evaluator]: [0mInference done 11/4952. 0.0765 s / img. ETA=0:06:25
[32m[11/26 22:13:15 d2.evaluation.evaluator]: [0mInference done 76/4952. 0.0761 s / img. ETA=0:06:20
[32m[11/26 22:13:20 d2.evaluation.evaluator]: [0mInference done 141/4952. 0.0760 s / img. ETA=0:06:14
[32m[11/26 22:13:25 d2.evaluation.evaluator]: [0mInference done 206/4952. 0.0760 s / img. ETA=0:06:09
[32m[11/26 22:13:30 d2.evaluation.evaluator]: [0mInference done 271/4952. 0.0759 s / img. ETA=0:06:04
[32m[11/26 22:13:35 d2.evaluation.evaluator]: [0mInference done 336/4952. 0.0759 s / img. ETA=0:05:59
[32m[11/26 22:13:40 d2.evaluation.evaluator]: [0mInference done 400/4952. 0.0761 s / img. ETA=0:05:55
[32m[11/26 22:13:45 d2.evaluation.evaluator]: [0mInference done 464/4952. 0.0761 s / img. ETA=0:05:50
[32m[11/26 22:13:51 d2.evaluation.evaluator]: [0mInference done 529/4952. 0.0761 s / img. ETA=0:05:45
[32m[11/26 22:13:56 d2.evaluation.evaluator]: [0mInference done 593/4952. 0.0761 s / img. ETA=0:05:40
[32m[11/26 22:14:01 d2.evaluation.evaluator]: [0mInference done 658/4952. 0.0761 s / img. ETA=0:05:34
[32m[11/26 22:14:06 d2.evaluation.evaluator]: [0mInference done 722/4952. 0.0762 s / img. ETA=0:05:30
[32m[11/26 22:14:11 d2.evaluation.evaluator]: [0mInference done 786/4952. 0.0762 s / img. ETA=0:05:25
[32m[11/26 22:14:16 d2.evaluation.evaluator]: [0mInference done 850/4952. 0.0762 s / img. ETA=0:05:20
[32m[11/26 22:14:21 d2.evaluation.evaluator]: [0mInference done 914/4952. 0.0763 s / img. ETA=0:05:15
[32m[11/26 22:14:26 d2.evaluation.evaluator]: [0mInference done 978/4952. 0.0763 s / img. ETA=0:05:10
[32m[11/26 22:14:31 d2.evaluation.evaluator]: [0mInference done 1042/4952. 0.0764 s / img. ETA=0:05:06
[32m[11/26 22:14:36 d2.evaluation.evaluator]: [0mInference done 1106/4952. 0.0764 s / img. ETA=0:05:01
[32m[11/26 22:14:41 d2.evaluation.evaluator]: [0mInference done 1170/4952. 0.0764 s / img. ETA=0:04:56
[32m[11/26 22:14:46 d2.evaluation.evaluator]: [0mInference done 1234/4952. 0.0765 s / img. ETA=0:04:51
[32m[11/26 22:14:51 d2.evaluation.evaluator]: [0mInference done 1298/4952. 0.0765 s / img. ETA=0:04:46
[32m[11/26 22:14:56 d2.evaluation.evaluator]: [0mInference done 1362/4952. 0.0765 s / img. ETA=0:04:41
[32m[11/26 22:15:01 d2.evaluation.evaluator]: [0mInference done 1426/4952. 0.0765 s / img. ETA=0:04:36
[32m[11/26 22:15:06 d2.evaluation.evaluator]: [0mInference done 1490/4952. 0.0765 s / img. ETA=0:04:31
[32m[11/26 22:15:11 d2.evaluation.evaluator]: [0mInference done 1554/4952. 0.0766 s / img. ETA=0:04:26
[32m[11/26 22:15:16 d2.evaluation.evaluator]: [0mInference done 1617/4952. 0.0766 s / img. ETA=0:04:21
[32m[11/26 22:15:21 d2.evaluation.evaluator]: [0mInference done 1682/4952. 0.0766 s / img. ETA=0:04:16
[32m[11/26 22:15:26 d2.evaluation.evaluator]: [0mInference done 1746/4952. 0.0766 s / img. ETA=0:04:11
[32m[11/26 22:15:31 d2.evaluation.evaluator]: [0mInference done 1810/4952. 0.0766 s / img. ETA=0:04:06
[32m[11/26 22:15:36 d2.evaluation.evaluator]: [0mInference done 1875/4952. 0.0766 s / img. ETA=0:04:01
[32m[11/26 22:15:42 d2.evaluation.evaluator]: [0mInference done 1939/4952. 0.0766 s / img. ETA=0:03:56
[32m[11/26 22:15:47 d2.evaluation.evaluator]: [0mInference done 2003/4952. 0.0766 s / img. ETA=0:03:51
[32m[11/26 22:15:52 d2.evaluation.evaluator]: [0mInference done 2067/4952. 0.0766 s / img. ETA=0:03:46
[32m[11/26 22:15:57 d2.evaluation.evaluator]: [0mInference done 2130/4952. 0.0766 s / img. ETA=0:03:41
[32m[11/26 22:16:02 d2.evaluation.evaluator]: [0mInference done 2194/4952. 0.0767 s / img. ETA=0:03:36
[32m[11/26 22:16:07 d2.evaluation.evaluator]: [0mInference done 2258/4952. 0.0767 s / img. ETA=0:03:31
[32m[11/26 22:16:12 d2.evaluation.evaluator]: [0mInference done 2322/4952. 0.0766 s / img. ETA=0:03:26
[32m[11/26 22:16:17 d2.evaluation.evaluator]: [0mInference done 2386/4952. 0.0767 s / img. ETA=0:03:21
[32m[11/26 22:16:22 d2.evaluation.evaluator]: [0mInference done 2450/4952. 0.0767 s / img. ETA=0:03:16
[32m[11/26 22:16:27 d2.evaluation.evaluator]: [0mInference done 2514/4952. 0.0767 s / img. ETA=0:03:11
[32m[11/26 22:16:32 d2.evaluation.evaluator]: [0mInference done 2578/4952. 0.0767 s / img. ETA=0:03:06
[32m[11/26 22:16:37 d2.evaluation.evaluator]: [0mInference done 2642/4952. 0.0767 s / img. ETA=0:03:01
[32m[11/26 22:16:42 d2.evaluation.evaluator]: [0mInference done 2705/4952. 0.0767 s / img. ETA=0:02:56
[32m[11/26 22:16:47 d2.evaluation.evaluator]: [0mInference done 2768/4952. 0.0768 s / img. ETA=0:02:51
[32m[11/26 22:16:52 d2.evaluation.evaluator]: [0mInference done 2832/4952. 0.0768 s / img. ETA=0:02:46
[32m[11/26 22:16:57 d2.evaluation.evaluator]: [0mInference done 2897/4952. 0.0767 s / img. ETA=0:02:41
[32m[11/26 22:17:02 d2.evaluation.evaluator]: [0mInference done 2961/4952. 0.0767 s / img. ETA=0:02:36
[32m[11/26 22:17:07 d2.evaluation.evaluator]: [0mInference done 3025/4952. 0.0768 s / img. ETA=0:02:31
[32m[11/26 22:17:12 d2.evaluation.evaluator]: [0mInference done 3089/4952. 0.0768 s / img. ETA=0:02:26
[32m[11/26 22:17:17 d2.evaluation.evaluator]: [0mInference done 3153/4952. 0.0768 s / img. ETA=0:02:21
[32m[11/26 22:17:22 d2.evaluation.evaluator]: [0mInference done 3217/4952. 0.0768 s / img. ETA=0:02:16
[32m[11/26 22:17:27 d2.evaluation.evaluator]: [0mInference done 3281/4952. 0.0768 s / img. ETA=0:02:11
[32m[11/26 22:17:32 d2.evaluation.evaluator]: [0mInference done 3345/4952. 0.0768 s / img. ETA=0:02:06
[32m[11/26 22:17:38 d2.evaluation.evaluator]: [0mInference done 3409/4952. 0.0768 s / img. ETA=0:02:01
[32m[11/26 22:17:43 d2.evaluation.evaluator]: [0mInference done 3472/4952. 0.0768 s / img. ETA=0:01:56
[32m[11/26 22:17:48 d2.evaluation.evaluator]: [0mInference done 3535/4952. 0.0768 s / img. ETA=0:01:51
[32m[11/26 22:17:53 d2.evaluation.evaluator]: [0mInference done 3598/4952. 0.0768 s / img. ETA=0:01:46
[32m[11/26 22:17:58 d2.evaluation.evaluator]: [0mInference done 3662/4952. 0.0768 s / img. ETA=0:01:41
[32m[11/26 22:18:03 d2.evaluation.evaluator]: [0mInference done 3726/4952. 0.0768 s / img. ETA=0:01:36
[32m[11/26 22:18:08 d2.evaluation.evaluator]: [0mInference done 3790/4952. 0.0768 s / img. ETA=0:01:31
[32m[11/26 22:18:13 d2.evaluation.evaluator]: [0mInference done 3854/4952. 0.0768 s / img. ETA=0:01:26
[32m[11/26 22:18:18 d2.evaluation.evaluator]: [0mInference done 3918/4952. 0.0768 s / img. ETA=0:01:21
[32m[11/26 22:18:23 d2.evaluation.evaluator]: [0mInference done 3982/4952. 0.0768 s / img. ETA=0:01:16
[32m[11/26 22:18:28 d2.evaluation.evaluator]: [0mInference done 4045/4952. 0.0769 s / img. ETA=0:01:11
[32m[11/26 22:18:33 d2.evaluation.evaluator]: [0mInference done 4109/4952. 0.0768 s / img. ETA=0:01:06
[32m[11/26 22:18:38 d2.evaluation.evaluator]: [0mInference done 4174/4952. 0.0768 s / img. ETA=0:01:01
[32m[11/26 22:18:43 d2.evaluation.evaluator]: [0mInference done 4237/4952. 0.0768 s / img. ETA=0:00:56
[32m[11/26 22:18:48 d2.evaluation.evaluator]: [0mInference done 4300/4952. 0.0769 s / img. ETA=0:00:51
[32m[11/26 22:18:53 d2.evaluation.evaluator]: [0mInference done 4363/4952. 0.0769 s / img. ETA=0:00:46
[32m[11/26 22:18:58 d2.evaluation.evaluator]: [0mInference done 4427/4952. 0.0769 s / img. ETA=0:00:41
[32m[11/26 22:19:03 d2.evaluation.evaluator]: [0mInference done 4491/4952. 0.0769 s / img. ETA=0:00:36
[32m[11/26 22:19:08 d2.evaluation.evaluator]: [0mInference done 4555/4952. 0.0769 s / img. ETA=0:00:31
[32m[11/26 22:19:13 d2.evaluation.evaluator]: [0mInference done 4619/4952. 0.0769 s / img. ETA=0:00:26
[32m[11/26 22:19:18 d2.evaluation.evaluator]: [0mInference done 4683/4952. 0.0769 s / img. ETA=0:00:21
[32m[11/26 22:19:23 d2.evaluation.evaluator]: [0mInference done 4747/4952. 0.0769 s / img. ETA=0:00:16
[32m[11/26 22:19:28 d2.evaluation.evaluator]: [0mInference done 4811/4952. 0.0769 s / img. ETA=0:00:11
[32m[11/26 22:19:33 d2.evaluation.evaluator]: [0mInference done 4875/4952. 0.0769 s / img. ETA=0:00:06
[32m[11/26 22:19:38 d2.evaluation.evaluator]: [0mInference done 4939/4952. 0.0769 s / img. ETA=0:00:01
[11/26 22:19:39 d2.evaluation.evaluator]: Total inference time: 0:06:29.762767 (0.078788 s / img per device, on 1 devices)
[11/26 22:19:39 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:20 (0.076857 s / img per device, on 1 devices)
[11/26 22:19:39 detectron2]: Image level evaluation complete for custom_voc_2007_test
[11/26 22:19:39 detectron2]: Results for custom_voc_2007_test
[11/26 22:19:40 detectron2]: AP for class no. 0: 0.8057809472084045
[11/26 22:19:40 detectron2]: AP for class no. 1: 0.7805687189102173
[11/26 22:19:40 detectron2]: AP for class no. 2: 0.6984018683433533
[11/26 22:19:40 detectron2]: AP for class no. 3: 0.6062182784080505
[11/26 22:19:40 detectron2]: AP for class no. 4: 0.5762754082679749
[11/26 22:19:41 detectron2]: AP for class no. 5: 0.7758022546768188
[11/26 22:19:41 detectron2]: AP for class no. 6: 0.7926622629165649
[11/26 22:19:41 detectron2]: AP for class no. 7: 0.8785473108291626
[11/26 22:19:41 detectron2]: AP for class no. 8: 0.5703489184379578
[11/26 22:19:41 detectron2]: AP for class no. 9: 0.7864148616790771
[11/26 22:19:41 detectron2]: AP for class no. 10: 0.6691756844520569
[11/26 22:19:41 detectron2]: AP for class no. 11: 0.8530338406562805
[11/26 22:19:42 detectron2]: AP for class no. 12: 0.8538536429405212
[11/26 22:19:42 detectron2]: AP for class no. 13: 0.8232013583183289
[11/26 22:19:42 detectron2]: AP for class no. 14: 0.786263108253479
[11/26 22:19:42 detectron2]: AP for class no. 15: 0.4942459166049957
[11/26 22:19:42 detectron2]: AP for class no. 16: 0.7428907155990601
[11/26 22:19:42 detectron2]: AP for class no. 17: 0.6694597601890564
[11/26 22:19:42 detectron2]: AP for class no. 18: 0.8503661751747131
[11/26 22:19:43 detectron2]: AP for class no. 19: 0.761128842830658
[11/26 22:19:43 detectron2]: mAP: 0.7387319207191467
WARNING [11/26 22:19:43 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[11/26 22:19:43 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[11/26 22:19:43 d2.data.build]: Distribution of instances among all 21 categories:
| category   | #instances   | category    | #instances   | category    | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
| unknown    | 15235        | aeroplane   | 0            | bicycle     | 0            |
| bird       | 0            | boat        | 0            | bottle      | 0            |
| bus        | 0            | car         | 0            | cat         | 0            |
| chair      | 0            | cow         | 0            | diningtable | 0            |
| dog        | 0            | horse       | 0            | motorbike   | 0            |
| person     | 0            | pottedplant | 0            | sheep       | 0            |
| sofa       | 0            | train       | 0            | tvmonitor   | 0            |
|            |              |             |              |             |              |
| total      | 15235        |             |              |             |              |
[11/26 22:19:43 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/26 22:19:43 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/26 22:19:43 d2.data.common]: Serialized dataset takes 8.39 MiB
[11/26 22:19:43 d2.evaluation.evaluator]: Start inference on 4952 images
[32m[11/26 22:19:45 d2.evaluation.evaluator]: [0mInference done 11/4952. 0.0770 s / img. ETA=0:06:28
[32m[11/26 22:19:50 d2.evaluation.evaluator]: [0mInference done 74/4952. 0.0777 s / img. ETA=0:06:27
[32m[11/26 22:19:55 d2.evaluation.evaluator]: [0mInference done 138/4952. 0.0773 s / img. ETA=0:06:20
[32m[11/26 22:20:00 d2.evaluation.evaluator]: [0mInference done 202/4952. 0.0774 s / img. ETA=0:06:15
[32m[11/26 22:20:05 d2.evaluation.evaluator]: [0mInference done 265/4952. 0.0776 s / img. ETA=0:06:11
[32m[11/26 22:20:10 d2.evaluation.evaluator]: [0mInference done 328/4952. 0.0776 s / img. ETA=0:06:07
[32m[11/26 22:20:15 d2.evaluation.evaluator]: [0mInference done 391/4952. 0.0778 s / img. ETA=0:06:02
[32m[11/26 22:20:20 d2.evaluation.evaluator]: [0mInference done 455/4952. 0.0777 s / img. ETA=0:05:57
[32m[11/26 22:20:25 d2.evaluation.evaluator]: [0mInference done 518/4952. 0.0778 s / img. ETA=0:05:52
[32m[11/26 22:20:30 d2.evaluation.evaluator]: [0mInference done 582/4952. 0.0776 s / img. ETA=0:05:47
[32m[11/26 22:20:35 d2.evaluation.evaluator]: [0mInference done 645/4952. 0.0777 s / img. ETA=0:05:42
[32m[11/26 22:20:40 d2.evaluation.evaluator]: [0mInference done 710/4952. 0.0776 s / img. ETA=0:05:36
[32m[11/26 22:20:45 d2.evaluation.evaluator]: [0mInference done 774/4952. 0.0776 s / img. ETA=0:05:31
[32m[11/26 22:20:50 d2.evaluation.evaluator]: [0mInference done 838/4952. 0.0775 s / img. ETA=0:05:26
[32m[11/26 22:20:55 d2.evaluation.evaluator]: [0mInference done 902/4952. 0.0775 s / img. ETA=0:05:21
[32m[11/26 22:21:00 d2.evaluation.evaluator]: [0mInference done 966/4952. 0.0775 s / img. ETA=0:05:15
[32m[11/26 22:21:05 d2.evaluation.evaluator]: [0mInference done 1030/4952. 0.0774 s / img. ETA=0:05:10
[32m[11/26 22:21:11 d2.evaluation.evaluator]: [0mInference done 1094/4952. 0.0774 s / img. ETA=0:05:05
[32m[11/26 22:21:16 d2.evaluation.evaluator]: [0mInference done 1157/4952. 0.0774 s / img. ETA=0:05:00
[32m[11/26 22:21:21 d2.evaluation.evaluator]: [0mInference done 1220/4952. 0.0774 s / img. ETA=0:04:55
[32m[11/26 22:21:26 d2.evaluation.evaluator]: [0mInference done 1283/4952. 0.0775 s / img. ETA=0:04:50
[32m[11/26 22:21:31 d2.evaluation.evaluator]: [0mInference done 1346/4952. 0.0775 s / img. ETA=0:04:45
[32m[11/26 22:21:36 d2.evaluation.evaluator]: [0mInference done 1409/4952. 0.0775 s / img. ETA=0:04:41
[32m[11/26 22:21:41 d2.evaluation.evaluator]: [0mInference done 1473/4952. 0.0775 s / img. ETA=0:04:35
[32m[11/26 22:21:46 d2.evaluation.evaluator]: [0mInference done 1537/4952. 0.0775 s / img. ETA=0:04:30
[32m[11/26 22:21:51 d2.evaluation.evaluator]: [0mInference done 1600/4952. 0.0775 s / img. ETA=0:04:25
[32m[11/26 22:21:56 d2.evaluation.evaluator]: [0mInference done 1662/4952. 0.0776 s / img. ETA=0:04:21
[32m[11/26 22:22:01 d2.evaluation.evaluator]: [0mInference done 1726/4952. 0.0775 s / img. ETA=0:04:16
[32m[11/26 22:22:06 d2.evaluation.evaluator]: [0mInference done 1791/4952. 0.0775 s / img. ETA=0:04:10
[32m[11/26 22:22:11 d2.evaluation.evaluator]: [0mInference done 1855/4952. 0.0775 s / img. ETA=0:04:05
[32m[11/26 22:22:16 d2.evaluation.evaluator]: [0mInference done 1919/4952. 0.0775 s / img. ETA=0:04:00
[32m[11/26 22:22:21 d2.evaluation.evaluator]: [0mInference done 1981/4952. 0.0775 s / img. ETA=0:03:55
[32m[11/26 22:22:26 d2.evaluation.evaluator]: [0mInference done 2044/4952. 0.0775 s / img. ETA=0:03:50
[32m[11/26 22:22:31 d2.evaluation.evaluator]: [0mInference done 2107/4952. 0.0776 s / img. ETA=0:03:45
[32m[11/26 22:22:36 d2.evaluation.evaluator]: [0mInference done 2171/4952. 0.0775 s / img. ETA=0:03:40
[32m[11/26 22:22:41 d2.evaluation.evaluator]: [0mInference done 2234/4952. 0.0775 s / img. ETA=0:03:35
[32m[11/26 22:22:46 d2.evaluation.evaluator]: [0mInference done 2297/4952. 0.0776 s / img. ETA=0:03:30
[32m[11/26 22:22:51 d2.evaluation.evaluator]: [0mInference done 2360/4952. 0.0776 s / img. ETA=0:03:25
[32m[11/26 22:22:56 d2.evaluation.evaluator]: [0mInference done 2424/4952. 0.0776 s / img. ETA=0:03:20
[32m[11/26 22:23:01 d2.evaluation.evaluator]: [0mInference done 2487/4952. 0.0776 s / img. ETA=0:03:15
[32m[11/26 22:23:06 d2.evaluation.evaluator]: [0mInference done 2551/4952. 0.0776 s / img. ETA=0:03:10
[32m[11/26 22:23:11 d2.evaluation.evaluator]: [0mInference done 2614/4952. 0.0776 s / img. ETA=0:03:05
[32m[11/26 22:23:16 d2.evaluation.evaluator]: [0mInference done 2678/4952. 0.0776 s / img. ETA=0:03:00
[32m[11/26 22:23:22 d2.evaluation.evaluator]: [0mInference done 2742/4952. 0.0776 s / img. ETA=0:02:55
[32m[11/26 22:23:27 d2.evaluation.evaluator]: [0mInference done 2807/4952. 0.0775 s / img. ETA=0:02:50
[32m[11/26 22:23:32 d2.evaluation.evaluator]: [0mInference done 2871/4952. 0.0775 s / img. ETA=0:02:45
[32m[11/26 22:23:37 d2.evaluation.evaluator]: [0mInference done 2934/4952. 0.0775 s / img. ETA=0:02:40
[32m[11/26 22:23:42 d2.evaluation.evaluator]: [0mInference done 2998/4952. 0.0775 s / img. ETA=0:02:34
[32m[11/26 22:23:47 d2.evaluation.evaluator]: [0mInference done 3061/4952. 0.0775 s / img. ETA=0:02:30
[32m[11/26 22:23:52 d2.evaluation.evaluator]: [0mInference done 3125/4952. 0.0775 s / img. ETA=0:02:24
[32m[11/26 22:23:57 d2.evaluation.evaluator]: [0mInference done 3189/4952. 0.0775 s / img. ETA=0:02:19
[32m[11/26 22:24:02 d2.evaluation.evaluator]: [0mInference done 3252/4952. 0.0775 s / img. ETA=0:02:14
[32m[11/26 22:24:07 d2.evaluation.evaluator]: [0mInference done 3315/4952. 0.0775 s / img. ETA=0:02:09
[32m[11/26 22:24:12 d2.evaluation.evaluator]: [0mInference done 3378/4952. 0.0775 s / img. ETA=0:02:04
[32m[11/26 22:24:17 d2.evaluation.evaluator]: [0mInference done 3443/4952. 0.0775 s / img. ETA=0:01:59
[32m[11/26 22:24:22 d2.evaluation.evaluator]: [0mInference done 3506/4952. 0.0775 s / img. ETA=0:01:54
[32m[11/26 22:24:27 d2.evaluation.evaluator]: [0mInference done 3569/4952. 0.0775 s / img. ETA=0:01:49
[32m[11/26 22:24:32 d2.evaluation.evaluator]: [0mInference done 3632/4952. 0.0775 s / img. ETA=0:01:44
[32m[11/26 22:24:37 d2.evaluation.evaluator]: [0mInference done 3696/4952. 0.0775 s / img. ETA=0:01:39
[32m[11/26 22:24:42 d2.evaluation.evaluator]: [0mInference done 3760/4952. 0.0775 s / img. ETA=0:01:34
[32m[11/26 22:24:47 d2.evaluation.evaluator]: [0mInference done 3822/4952. 0.0775 s / img. ETA=0:01:29
[32m[11/26 22:24:52 d2.evaluation.evaluator]: [0mInference done 3886/4952. 0.0775 s / img. ETA=0:01:24
[32m[11/26 22:24:57 d2.evaluation.evaluator]: [0mInference done 3949/4952. 0.0775 s / img. ETA=0:01:19
[32m[11/26 22:25:02 d2.evaluation.evaluator]: [0mInference done 4013/4952. 0.0775 s / img. ETA=0:01:14
[32m[11/26 22:25:07 d2.evaluation.evaluator]: [0mInference done 4076/4952. 0.0775 s / img. ETA=0:01:09
[32m[11/26 22:25:12 d2.evaluation.evaluator]: [0mInference done 4139/4952. 0.0775 s / img. ETA=0:01:04
[32m[11/26 22:25:17 d2.evaluation.evaluator]: [0mInference done 4202/4952. 0.0776 s / img. ETA=0:00:59
[32m[11/26 22:25:22 d2.evaluation.evaluator]: [0mInference done 4265/4952. 0.0776 s / img. ETA=0:00:54
[32m[11/26 22:25:28 d2.evaluation.evaluator]: [0mInference done 4329/4952. 0.0776 s / img. ETA=0:00:49
[32m[11/26 22:25:33 d2.evaluation.evaluator]: [0mInference done 4393/4952. 0.0776 s / img. ETA=0:00:44
[32m[11/26 22:25:38 d2.evaluation.evaluator]: [0mInference done 4457/4952. 0.0776 s / img. ETA=0:00:39
[32m[11/26 22:25:43 d2.evaluation.evaluator]: [0mInference done 4521/4952. 0.0776 s / img. ETA=0:00:34
[32m[11/26 22:25:48 d2.evaluation.evaluator]: [0mInference done 4584/4952. 0.0776 s / img. ETA=0:00:29
[32m[11/26 22:25:53 d2.evaluation.evaluator]: [0mInference done 4648/4952. 0.0776 s / img. ETA=0:00:24
[32m[11/26 22:25:58 d2.evaluation.evaluator]: [0mInference done 4712/4952. 0.0775 s / img. ETA=0:00:19
[32m[11/26 22:26:03 d2.evaluation.evaluator]: [0mInference done 4775/4952. 0.0776 s / img. ETA=0:00:14
[32m[11/26 22:26:08 d2.evaluation.evaluator]: [0mInference done 4838/4952. 0.0776 s / img. ETA=0:00:09
[32m[11/26 22:26:13 d2.evaluation.evaluator]: [0mInference done 4901/4952. 0.0776 s / img. ETA=0:00:04
[11/26 22:26:17 d2.evaluation.evaluator]: Total inference time: 0:06:32.709267 (0.079383 s / img per device, on 1 devices)
[11/26 22:26:17 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:23 (0.077551 s / img per device, on 1 devices)
[11/26 22:26:17 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[11/26 22:26:17 detectron2]: Results for WR1_Mixed_Unknowns
[11/26 22:26:17 detectron2]: AP for class no. 0: 0.0
[11/26 22:26:17 detectron2]: AP for class no. 1: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 2: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 3: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 4: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 5: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 6: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 7: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 8: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 9: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 10: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 11: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 12: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 13: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 14: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 15: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 16: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 17: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 18: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 19: 0.0
[11/26 22:26:20 detectron2]: mAP: 0.0
[11/26 22:26:20 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[11/26 22:26:20 detectron2]: AP for class no. 0: 0.7918533086776733
[11/26 22:26:20 detectron2]: AP for class no. 1: 0.7716333866119385
[11/26 22:26:20 detectron2]: AP for class no. 2: 0.6370219588279724
[11/26 22:26:20 detectron2]: AP for class no. 3: 0.5901530385017395
[11/26 22:26:20 detectron2]: AP for class no. 4: 0.5282232165336609
[11/26 22:26:20 detectron2]: AP for class no. 5: 0.746399462223053
[11/26 22:26:20 detectron2]: AP for class no. 6: 0.7705965638160706
[11/26 22:26:20 detectron2]: AP for class no. 7: 0.8529040813446045
[11/26 22:26:20 detectron2]: AP for class no. 8: 0.5244924426078796
[11/26 22:26:20 detectron2]: AP for class no. 9: 0.6216275691986084
[11/26 22:26:20 detectron2]: AP for class no. 10: 0.4555111825466156
[11/26 22:26:20 detectron2]: AP for class no. 11: 0.7658306360244751
[11/26 22:26:20 detectron2]: AP for class no. 12: 0.7069307565689087
[11/26 22:26:20 detectron2]: AP for class no. 13: 0.8073904514312744
[11/26 22:26:20 detectron2]: AP for class no. 14: 0.7774863839149475
[11/26 22:26:20 detectron2]: AP for class no. 15: 0.40883171558380127
[11/26 22:26:20 detectron2]: AP for class no. 16: 0.6260425448417664
[11/26 22:26:20 detectron2]: AP for class no. 17: 0.5717162489891052
[11/26 22:26:20 detectron2]: AP for class no. 18: 0.8286178708076477
[11/26 22:26:20 detectron2]: AP for class no. 19: 0.6816982626914978
[11/26 22:26:20 detectron2]: mAP: 0.6732480525970459
[32m[11/26 22:26:21 detectron2]: [0m************************** Performance at Wilderness level 0.00 **************************
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 0 at wilderness 0.00: 0.8057809472084045
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 1 at wilderness 0.00: 0.7805687189102173
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 2 at wilderness 0.00: 0.6984018683433533
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 3 at wilderness 0.00: 0.6062182784080505
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 4 at wilderness 0.00: 0.5762754082679749
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 5 at wilderness 0.00: 0.7758022546768188
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 6 at wilderness 0.00: 0.7926622629165649
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 7 at wilderness 0.00: 0.8785473108291626
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 8 at wilderness 0.00: 0.5703489184379578
[32m[11/26 22:26:21 detectron2]: [0mAP for class no. 9 at wilderness 0.00: 0.7864148616790771
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.00: 0.6691756844520569
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.00: 0.8530338406562805
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.00: 0.8538536429405212
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.00: 0.8232013583183289
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.00: 0.786263108253479
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.00: 0.4942459166049957
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.00: 0.7428907155990601
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.00: 0.6694597601890564
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.00: 0.8503661751747131
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.00: 0.761128842830658
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.00: 0.7387319207191467
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.10 **************************
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 0 at wilderness 0.10: 0.804438054561615
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 1 at wilderness 0.10: 0.7796733975410461
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 2 at wilderness 0.10: 0.6916602253913879
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 3 at wilderness 0.10: 0.6043260097503662
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 4 at wilderness 0.10: 0.5696216821670532
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 5 at wilderness 0.10: 0.7739158868789673
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 6 at wilderness 0.10: 0.7923885583877563
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 7 at wilderness 0.10: 0.8749563694000244
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 8 at wilderness 0.10: 0.566605806350708
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 9 at wilderness 0.10: 0.7556943893432617
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.10: 0.6324914693832397
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.10: 0.83938068151474
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.10: 0.8256282806396484
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.10: 0.8221734762191772
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.10: 0.7857226729393005
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.10: 0.4817371666431427
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.10: 0.7131674289703369
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.10: 0.6609698534011841
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.10: 0.8460453152656555
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.10: 0.752105176448822
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.10: 0.7286350131034851
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.20 **************************
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 0 at wilderness 0.20: 0.8038345575332642
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 1 at wilderness 0.20: 0.7789458632469177
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 2 at wilderness 0.20: 0.6834487318992615
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 3 at wilderness 0.20: 0.6027707457542419
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 4 at wilderness 0.20: 0.5652671456336975
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 5 at wilderness 0.20: 0.7709394097328186
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 6 at wilderness 0.20: 0.7894852161407471
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 7 at wilderness 0.20: 0.8729369640350342
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 8 at wilderness 0.20: 0.5598958730697632
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 9 at wilderness 0.20: 0.726253092288971
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.20: 0.6147267818450928
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.20: 0.831853449344635
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.20: 0.8023135662078857
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.20: 0.8200926184654236
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.20: 0.7845827341079712
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.20: 0.4718787968158722
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.20: 0.6918453574180603
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.20: 0.6525976061820984
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.20: 0.8430576324462891
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.20: 0.742803156375885
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.20: 0.720476508140564
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.30 **************************
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 0 at wilderness 0.30: 0.8021126389503479
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 1 at wilderness 0.30: 0.7785171270370483
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 2 at wilderness 0.30: 0.6781996488571167
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 3 at wilderness 0.30: 0.6018539071083069
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 4 at wilderness 0.30: 0.5566883683204651
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 5 at wilderness 0.30: 0.7685409784317017
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 6 at wilderness 0.30: 0.7867055535316467
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 7 at wilderness 0.30: 0.8666488528251648
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 8 at wilderness 0.30: 0.5532926321029663
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 9 at wilderness 0.30: 0.694472074508667
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.30: 0.5853824019432068
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.30: 0.8148987293243408
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.30: 0.7823802828788757
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.30: 0.818779706954956
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.30: 0.783781111240387
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.30: 0.4620145857334137
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.30: 0.6779978275299072
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.30: 0.6455773711204529
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.30: 0.8411329388618469
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.30: 0.7316110134124756
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.30: 0.7115293741226196
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.40 **************************
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 0 at wilderness 0.40: 0.8021126389503479
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 1 at wilderness 0.40: 0.7769022583961487
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 2 at wilderness 0.40: 0.6708421111106873
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 3 at wilderness 0.40: 0.5991898775100708
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 4 at wilderness 0.40: 0.5525819659233093
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 5 at wilderness 0.40: 0.7674712538719177
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 6 at wilderness 0.40: 0.7853753566741943
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 7 at wilderness 0.40: 0.8651075959205627
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 8 at wilderness 0.40: 0.5485300421714783
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 9 at wilderness 0.40: 0.6755051016807556
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.40: 0.5651013255119324
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.40: 0.8073036074638367
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.40: 0.7736687064170837
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.40: 0.8166764974594116
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.40: 0.7825345396995544
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.40: 0.45249173045158386
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.40: 0.6693383455276489
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.40: 0.6324578523635864
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.40: 0.8387308716773987
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.40: 0.726614236831665
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.40: 0.7054267525672913
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.50 **************************
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 0 at wilderness 0.50: 0.8012487888336182
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 1 at wilderness 0.50: 0.7758060097694397
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 2 at wilderness 0.50: 0.665831983089447
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 3 at wilderness 0.50: 0.5980117917060852
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 4 at wilderness 0.50: 0.5491288900375366
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 5 at wilderness 0.50: 0.7666521072387695
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 6 at wilderness 0.50: 0.7830488681793213
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 7 at wilderness 0.50: 0.8607292771339417
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 8 at wilderness 0.50: 0.5443422198295593
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 9 at wilderness 0.50: 0.6675994992256165
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 10 at wilderness 0.50: 0.5508967041969299
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 11 at wilderness 0.50: 0.8002014756202698
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 12 at wilderness 0.50: 0.7540932297706604
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 13 at wilderness 0.50: 0.8157699108123779
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 14 at wilderness 0.50: 0.7819458246231079
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 15 at wilderness 0.50: 0.4405312240123749
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 16 at wilderness 0.50: 0.6639074683189392
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 17 at wilderness 0.50: 0.6185230016708374
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 18 at wilderness 0.50: 0.836235761642456
[32m[11/26 22:26:22 detectron2]: [0mAP for class no. 19 at wilderness 0.50: 0.7149853110313416
[32m[11/26 22:26:22 detectron2]: [0mmAP at wilderness 0.50: 0.6994744539260864
[32m[11/26 22:26:22 detectron2]: [0m************************** Performance at Wilderness level 0.60 **************************
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 0 at wilderness 0.60: 0.7966757416725159
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 1 at wilderness 0.60: 0.7748817801475525
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 2 at wilderness 0.60: 0.6601483821868896
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 3 at wilderness 0.60: 0.5965762138366699
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 4 at wilderness 0.60: 0.5440265536308289
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 5 at wilderness 0.60: 0.7626188397407532
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 6 at wilderness 0.60: 0.7800941467285156
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 7 at wilderness 0.60: 0.8594475388526917
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 8 at wilderness 0.60: 0.5392826199531555
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 9 at wilderness 0.60: 0.6606490612030029
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 10 at wilderness 0.60: 0.5300838947296143
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 11 at wilderness 0.60: 0.7948688864707947
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 12 at wilderness 0.60: 0.7452663779258728
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 13 at wilderness 0.60: 0.8138160109519958
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 14 at wilderness 0.60: 0.7807357907295227
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 15 at wilderness 0.60: 0.42916011810302734
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 16 at wilderness 0.60: 0.6592159867286682
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 17 at wilderness 0.60: 0.6051133871078491
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 18 at wilderness 0.60: 0.8345475792884827
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 19 at wilderness 0.60: 0.7058286070823669
[32m[11/26 22:26:23 detectron2]: [0mmAP at wilderness 0.60: 0.6936518549919128
[32m[11/26 22:26:23 detectron2]: [0m************************** Performance at Wilderness level 0.70 **************************
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 0 at wilderness 0.70: 0.7958589196205139
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 1 at wilderness 0.70: 0.7733601927757263
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 2 at wilderness 0.70: 0.6554779410362244
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 3 at wilderness 0.70: 0.5954850316047668
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 4 at wilderness 0.70: 0.5386600494384766
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 5 at wilderness 0.70: 0.7579004764556885
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 6 at wilderness 0.70: 0.7753849625587463
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 7 at wilderness 0.70: 0.8579299449920654
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 8 at wilderness 0.70: 0.5357330441474915
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 9 at wilderness 0.70: 0.6533847451210022
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 10 at wilderness 0.70: 0.5144810080528259
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 11 at wilderness 0.70: 0.7861344814300537
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 12 at wilderness 0.70: 0.7389581203460693
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 13 at wilderness 0.70: 0.8119361996650696
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 14 at wilderness 0.70: 0.7797604203224182
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 15 at wilderness 0.70: 0.4227277636528015
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 16 at wilderness 0.70: 0.651122510433197
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 17 at wilderness 0.70: 0.5960981249809265
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 18 at wilderness 0.70: 0.8328948616981506
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 19 at wilderness 0.70: 0.6976178884506226
[32m[11/26 22:26:23 detectron2]: [0mmAP at wilderness 0.70: 0.6885453462600708
[32m[11/26 22:26:23 detectron2]: [0m************************** Performance at Wilderness level 0.80 **************************
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 0 at wilderness 0.80: 0.7941478490829468
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 1 at wilderness 0.80: 0.7725073099136353
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 2 at wilderness 0.80: 0.6468381285667419
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 3 at wilderness 0.80: 0.5937878489494324
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 4 at wilderness 0.80: 0.5349993705749512
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 5 at wilderness 0.80: 0.7492397427558899
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 6 at wilderness 0.80: 0.773811399936676
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 7 at wilderness 0.80: 0.8561802506446838
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 8 at wilderness 0.80: 0.5323456525802612
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 9 at wilderness 0.80: 0.6361750364303589
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 10 at wilderness 0.80: 0.48390886187553406
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 11 at wilderness 0.80: 0.7755957245826721
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 12 at wilderness 0.80: 0.7253042459487915
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 13 at wilderness 0.80: 0.809639573097229
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 14 at wilderness 0.80: 0.7793474197387695
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 15 at wilderness 0.80: 0.41905367374420166
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 16 at wilderness 0.80: 0.6441572904586792
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 17 at wilderness 0.80: 0.5905238389968872
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 18 at wilderness 0.80: 0.8312480449676514
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 19 at wilderness 0.80: 0.691842794418335
[32m[11/26 22:26:23 detectron2]: [0mmAP at wilderness 0.80: 0.6820327639579773
[32m[11/26 22:26:23 detectron2]: [0m************************** Performance at Wilderness level 0.90 **************************
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 0 at wilderness 0.90: 0.792736828327179
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 1 at wilderness 0.90: 0.7721492648124695
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 2 at wilderness 0.90: 0.6420716643333435
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 3 at wilderness 0.90: 0.5918503403663635
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 4 at wilderness 0.90: 0.5326953530311584
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 5 at wilderness 0.90: 0.7487021088600159
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 6 at wilderness 0.90: 0.7716047763824463
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 7 at wilderness 0.90: 0.854636549949646
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 8 at wilderness 0.90: 0.5296754240989685
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 9 at wilderness 0.90: 0.631446897983551
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 10 at wilderness 0.90: 0.4674086570739746
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 11 at wilderness 0.90: 0.7721824049949646
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 12 at wilderness 0.90: 0.7174472212791443
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 13 at wilderness 0.90: 0.8082637786865234
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 14 at wilderness 0.90: 0.7785699963569641
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 15 at wilderness 0.90: 0.41314542293548584
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 16 at wilderness 0.90: 0.6373736262321472
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 17 at wilderness 0.90: 0.5822394490242004
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 18 at wilderness 0.90: 0.8297340869903564
[32m[11/26 22:26:23 detectron2]: [0mAP for class no. 19 at wilderness 0.90: 0.6896778345108032
[32m[11/26 22:26:23 detectron2]: [0mmAP at wilderness 0.90: 0.6781805753707886
[32m[11/26 22:26:23 detectron2]: [0m************************** Performance at Wilderness level 1.00 **************************
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 0 at wilderness 1.00: 0.7918533086776733
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 1 at wilderness 1.00: 0.7716333866119385
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 2 at wilderness 1.00: 0.6370219588279724
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 3 at wilderness 1.00: 0.5901530385017395
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 4 at wilderness 1.00: 0.5282232165336609
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 5 at wilderness 1.00: 0.746399462223053
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 6 at wilderness 1.00: 0.7705965638160706
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 7 at wilderness 1.00: 0.8529040813446045
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 8 at wilderness 1.00: 0.5244924426078796
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 9 at wilderness 1.00: 0.6216275691986084
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 10 at wilderness 1.00: 0.4555111825466156
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 11 at wilderness 1.00: 0.7658306360244751
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 12 at wilderness 1.00: 0.7069307565689087
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 13 at wilderness 1.00: 0.8073904514312744
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 14 at wilderness 1.00: 0.7774851322174072
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 15 at wilderness 1.00: 0.40883171558380127
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 16 at wilderness 1.00: 0.6260425448417664
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 17 at wilderness 1.00: 0.5717162489891052
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 18 at wilderness 1.00: 0.8286178708076477
[32m[11/26 22:26:24 detectron2]: [0mAP for class no. 19 at wilderness 1.00: 0.6816982626914978
[32m[11/26 22:26:24 detectron2]: [0mmAP at wilderness 1.00: 0.6732479333877563
Could you confirm whether the discrepancy with respect to the 4-GPU results is reasonable?
@akshay-raj-dhamija, hello, could you explain how to obtain the Wilderness Impact reported in Table 2 of the paper from these logged results?
Thanks,
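For what it's worth, my reading of the paper is that Wilderness Impact is not one of the AP numbers printed above but a precision ratio: WI = P_closed / P_open - 1, where P_closed is the precision on the closed test set (known classes only) and P_open is the precision once the wilderness/unknown images are mixed in, measured at a fixed recall level. A tiny sketch of that ratio (the precision values and recall below are placeholders for illustration, not numbers taken from this log):

```python
def wilderness_impact(precision_closed: float, precision_open: float) -> float:
    # WI = P_closed / P_open - 1  (my understanding of the paper's definition;
    # please correct me if the repo computes it differently).
    return precision_closed / precision_open - 1.0

# Hypothetical precision values at some fixed recall (e.g. 0.8), for illustration only.
print(wilderness_impact(precision_closed=0.80, precision_open=0.72))  # approx. 0.111
```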
Hello, I liked your work and wanted to test the evaluation mechanism and the associated code. I executed the following command as mentioned in the README.md.
But after inference on 2476 images, the following error occurs:
Has there been any update to this code, or do you have any input on how to fix this? Thank you.