Vastlab / Elephant-of-object-detection


RuntimeError: All input tensors must be on the same device. #3

Closed: deepaksinghcv closed this issue 3 years ago

deepaksinghcv commented 3 years ago

Hello, I liked your work and wanted to test the evaluation mechanism and the associated code. I executed the following command, as mentioned in the README.md:

python main.py --num-gpus 2 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only

But after inference on 2476 images, the following error occurs:

[10/10 10:34:33 d2.evaluation.evaluator]: Start inference on 2476 images
[10/10 10:34:47 d2.evaluation.evaluator]: Inference done 11/2476. 0.1699 s / img. ETA=0:08:04
[10/10 10:34:53 d2.evaluation.evaluator]: Inference done 49/2476. 0.1263 s / img. ETA=0:05:44
[10/10 10:34:58 d2.evaluation.evaluator]: Inference done 84/2476. 0.1290 s / img. ETA=0:05:42
[10/10 10:35:03 d2.evaluation.evaluator]: Inference done 122/2476. 0.1262 s / img. ETA=0:05:30
[10/10 10:35:08 d2.evaluation.evaluator]: Inference done 160/2476. 0.1232 s / img. ETA=0:05:20
[10/10 10:35:13 d2.evaluation.evaluator]: Inference done 197/2476. 0.1227 s / img. ETA=0:05:13
[10/10 10:35:18 d2.evaluation.evaluator]: Inference done 234/2476. 0.1228 s / img. ETA=0:05:08
[10/10 10:35:23 d2.evaluation.evaluator]: Inference done 273/2476. 0.1213 s / img. ETA=0:05:00
[10/10 10:35:28 d2.evaluation.evaluator]: Inference done 311/2476. 0.1207 s / img. ETA=0:04:53
[10/10 10:35:33 d2.evaluation.evaluator]: Inference done 349/2476. 0.1206 s / img. ETA=0:04:47
[10/10 10:35:38 d2.evaluation.evaluator]: Inference done 386/2476. 0.1208 s / img. ETA=0:04:42
[10/10 10:35:43 d2.evaluation.evaluator]: Inference done 423/2476. 0.1213 s / img. ETA=0:04:38
[10/10 10:35:48 d2.evaluation.evaluator]: Inference done 460/2476. 0.1214 s / img. ETA=0:04:33
[10/10 10:35:53 d2.evaluation.evaluator]: Inference done 498/2476. 0.1211 s / img. ETA=0:04:27
[10/10 10:35:58 d2.evaluation.evaluator]: Inference done 535/2476. 0.1215 s / img. ETA=0:04:23
[10/10 10:36:03 d2.evaluation.evaluator]: Inference done 573/2476. 0.1215 s / img. ETA=0:04:18
[10/10 10:36:08 d2.evaluation.evaluator]: Inference done 611/2476. 0.1212 s / img. ETA=0:04:12
[10/10 10:36:13 d2.evaluation.evaluator]: Inference done 649/2476. 0.1211 s / img. ETA=0:04:07
[10/10 10:36:18 d2.evaluation.evaluator]: Inference done 687/2476. 0.1212 s / img. ETA=0:04:01
[10/10 10:36:24 d2.evaluation.evaluator]: Inference done 724/2476. 0.1211 s / img. ETA=0:03:56
[10/10 10:36:29 d2.evaluation.evaluator]: Inference done 762/2476. 0.1210 s / img. ETA=0:03:51
[10/10 10:36:34 d2.evaluation.evaluator]: Inference done 800/2476. 0.1212 s / img. ETA=0:03:46
[10/10 10:36:39 d2.evaluation.evaluator]: Inference done 839/2476. 0.1210 s / img. ETA=0:03:40
[10/10 10:36:44 d2.evaluation.evaluator]: Inference done 877/2476. 0.1211 s / img. ETA=0:03:35
[10/10 10:36:49 d2.evaluation.evaluator]: Inference done 915/2476. 0.1212 s / img. ETA=0:03:30
[10/10 10:36:54 d2.evaluation.evaluator]: Inference done 952/2476. 0.1213 s / img. ETA=0:03:25
[10/10 10:36:59 d2.evaluation.evaluator]: Inference done 990/2476. 0.1211 s / img. ETA=0:03:20
[10/10 10:37:04 d2.evaluation.evaluator]: Inference done 1028/2476. 0.1211 s / img. ETA=0:03:15
[10/10 10:37:09 d2.evaluation.evaluator]: Inference done 1067/2476. 0.1210 s / img. ETA=0:03:09
[10/10 10:37:14 d2.evaluation.evaluator]: Inference done 1105/2476. 0.1210 s / img. ETA=0:03:04
[10/10 10:37:20 d2.evaluation.evaluator]: Inference done 1142/2476. 0.1212 s / img. ETA=0:02:59
[10/10 10:37:25 d2.evaluation.evaluator]: Inference done 1179/2476. 0.1213 s / img. ETA=0:02:54
[10/10 10:37:30 d2.evaluation.evaluator]: Inference done 1217/2476. 0.1212 s / img. ETA=0:02:49
[10/10 10:37:35 d2.evaluation.evaluator]: Inference done 1255/2476. 0.1213 s / img. ETA=0:02:44
[10/10 10:37:40 d2.evaluation.evaluator]: Inference done 1293/2476. 0.1212 s / img. ETA=0:02:39
[10/10 10:37:45 d2.evaluation.evaluator]: Inference done 1330/2476. 0.1213 s / img. ETA=0:02:34
[10/10 10:37:50 d2.evaluation.evaluator]: Inference done 1368/2476. 0.1212 s / img. ETA=0:02:29
[10/10 10:37:55 d2.evaluation.evaluator]: Inference done 1406/2476. 0.1211 s / img. ETA=0:02:23
[10/10 10:38:00 d2.evaluation.evaluator]: Inference done 1444/2476. 0.1211 s / img. ETA=0:02:18
[10/10 10:38:05 d2.evaluation.evaluator]: Inference done 1482/2476. 0.1212 s / img. ETA=0:02:13
[10/10 10:38:10 d2.evaluation.evaluator]: Inference done 1520/2476. 0.1211 s / img. ETA=0:02:08
[10/10 10:38:15 d2.evaluation.evaluator]: Inference done 1557/2476. 0.1212 s / img. ETA=0:02:03
[10/10 10:38:20 d2.evaluation.evaluator]: Inference done 1594/2476. 0.1212 s / img. ETA=0:01:58
[10/10 10:38:25 d2.evaluation.evaluator]: Inference done 1633/2476. 0.1211 s / img. ETA=0:01:53
[10/10 10:38:30 d2.evaluation.evaluator]: Inference done 1672/2476. 0.1210 s / img. ETA=0:01:48
[10/10 10:38:35 d2.evaluation.evaluator]: Inference done 1709/2476. 0.1211 s / img. ETA=0:01:43
[10/10 10:38:41 d2.evaluation.evaluator]: Inference done 1747/2476. 0.1211 s / img. ETA=0:01:38
[10/10 10:38:46 d2.evaluation.evaluator]: Inference done 1785/2476. 0.1211 s / img. ETA=0:01:32
[10/10 10:38:51 d2.evaluation.evaluator]: Inference done 1822/2476. 0.1211 s / img. ETA=0:01:27
[10/10 10:38:56 d2.evaluation.evaluator]: Inference done 1860/2476. 0.1211 s / img. ETA=0:01:22
[10/10 10:39:01 d2.evaluation.evaluator]: Inference done 1898/2476. 0.1211 s / img. ETA=0:01:17
[10/10 10:39:06 d2.evaluation.evaluator]: Inference done 1935/2476. 0.1211 s / img. ETA=0:01:12
[10/10 10:39:11 d2.evaluation.evaluator]: Inference done 1973/2476. 0.1211 s / img. ETA=0:01:07
[10/10 10:39:16 d2.evaluation.evaluator]: Inference done 2011/2476. 0.1211 s / img. ETA=0:01:02
[10/10 10:39:21 d2.evaluation.evaluator]: Inference done 2048/2476. 0.1212 s / img. ETA=0:00:57
[10/10 10:39:26 d2.evaluation.evaluator]: Inference done 2086/2476. 0.1212 s / img. ETA=0:00:52
[10/10 10:39:31 d2.evaluation.evaluator]: Inference done 2124/2476. 0.1212 s / img. ETA=0:00:47
[10/10 10:39:36 d2.evaluation.evaluator]: Inference done 2162/2476. 0.1212 s / img. ETA=0:00:42
[10/10 10:39:41 d2.evaluation.evaluator]: Inference done 2201/2476. 0.1212 s / img. ETA=0:00:36
[10/10 10:39:47 d2.evaluation.evaluator]: Inference done 2238/2476. 0.1212 s / img. ETA=0:00:31
[10/10 10:39:52 d2.evaluation.evaluator]: Inference done 2276/2476. 0.1213 s / img. ETA=0:00:26
[10/10 10:39:57 d2.evaluation.evaluator]: Inference done 2313/2476. 0.1213 s / img. ETA=0:00:21
[10/10 10:40:02 d2.evaluation.evaluator]: Inference done 2351/2476. 0.1213 s / img. ETA=0:00:16
[10/10 10:40:07 d2.evaluation.evaluator]: Inference done 2389/2476. 0.1211 s / img. ETA=0:00:11
[10/10 10:40:12 d2.evaluation.evaluator]: Inference done 2426/2476. 0.1212 s / img. ETA=0:00:06
[10/10 10:40:17 d2.evaluation.evaluator]: Inference done 2464/2476. 0.1212 s / img. ETA=0:00:01
[10/10 10:40:19 d2.evaluation.evaluator]: Total inference time: 0:05:33.130279 (0.134816 s / img per device, on 2 devices)
[10/10 10:40:19 d2.evaluation.evaluator]: Total inference pure compute time: 0:04:59 (0.121209 s / img per device, on 2 devices)
/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/storage.py:34: FutureWarning: pickle support for Storage will be removed in 1.5. Use `torch.save` instead
  warnings.warn("pickle support for Storage will be removed in 1.5. Use `torch.save` instead", FutureWarning)
/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/storage.py:34: FutureWarning: pickle support for Storage will be removed in 1.5. Use `torch.save` instead
  warnings.warn("pickle support for Storage will be removed in 1.5. Use `torch.save` instead", FutureWarning)
[10/10 10:40:29 detectron2]: Image level evaluation complete for custom_voc_2007_test
[10/10 10:40:29 detectron2]: Results for custom_voc_2007_test
Traceback (most recent call last):
  File "main.py", line 198, in <module>
    args=(args,),
  File "/home/dksingh/objdet/detectron2/detectron2/engine/launch.py", line 59, in launch
    daemon=False,
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/home/dksingh/objdet/detectron2/detectron2/engine/launch.py", line 94, in _distributed_worker
    main_func(*args)
  File "/home/dksingh/paper_impl/Elephant-of-object-detection/main.py", line 177, in main
    return do_test(cfg, model)
  File "/home/dksingh/paper_impl/Elephant-of-object-detection/main.py", line 63, in do_test
    evaluator._coco_api.cats)
  File "/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py", line 42, in only_mAP_analysis
    scores = torch.cat(scores)
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:1

Has there been any update to this code, or do you have any inputs on how to fix it? Thank you.
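
For reference, the failure is at WIC.py line 42, where score tensors gathered from both GPUs are concatenated; torch.cat requires all of its inputs to live on a single device. Below is a minimal sketch of a possible workaround, assuming scores is a list of tensors that may sit on different devices (the helper name is illustrative, not the repo's actual code):

import torch

def cat_on_common_device(tensors, device="cpu"):
    # torch.cat needs every tensor on one device, so move each
    # tensor to a common device (CPU here) before concatenating.
    return torch.cat([t.to(device) for t in tensors])

# in WIC.py, instead of: scores = torch.cat(scores)
# scores = cat_on_common_device(scores)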

deepaksinghcv commented 3 years ago

I later executed the code on a single GPU as follows:

python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only

It throws the following error:

[10/10 10:56:44 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 10:56:45 d2.evaluation.evaluator]: Inference done 11/4952. 0.0779 s / img. ETA=0:06:34
[10/10 10:56:50 d2.evaluation.evaluator]: Inference done 72/4952. 0.0798 s / img. ETA=0:06:40
[10/10 10:56:55 d2.evaluation.evaluator]: Inference done 136/4952. 0.0782 s / img. ETA=0:06:28
[10/10 10:57:00 d2.evaluation.evaluator]: Inference done 200/4952. 0.0774 s / img. ETA=0:06:19
[10/10 10:57:05 d2.evaluation.evaluator]: Inference done 264/4952. 0.0771 s / img. ETA=0:06:12
[10/10 10:57:10 d2.evaluation.evaluator]: Inference done 328/4952. 0.0769 s / img. ETA=0:06:06
[10/10 10:57:16 d2.evaluation.evaluator]: Inference done 392/4952. 0.0769 s / img. ETA=0:06:01
[10/10 10:57:21 d2.evaluation.evaluator]: Inference done 456/4952. 0.0767 s / img. ETA=0:05:55
[10/10 10:57:26 d2.evaluation.evaluator]: Inference done 519/4952. 0.0768 s / img. ETA=0:05:50
[10/10 10:57:31 d2.evaluation.evaluator]: Inference done 583/4952. 0.0768 s / img. ETA=0:05:45
[10/10 10:57:36 d2.evaluation.evaluator]: Inference done 647/4952. 0.0767 s / img. ETA=0:05:40
[10/10 10:57:41 d2.evaluation.evaluator]: Inference done 710/4952. 0.0768 s / img. ETA=0:05:35
[10/10 10:57:46 d2.evaluation.evaluator]: Inference done 774/4952. 0.0767 s / img. ETA=0:05:30
[10/10 10:57:51 d2.evaluation.evaluator]: Inference done 838/4952. 0.0767 s / img. ETA=0:05:25
[10/10 10:57:56 d2.evaluation.evaluator]: Inference done 902/4952. 0.0767 s / img. ETA=0:05:20
[10/10 10:58:01 d2.evaluation.evaluator]: Inference done 966/4952. 0.0767 s / img. ETA=0:05:15
[10/10 10:58:06 d2.evaluation.evaluator]: Inference done 1029/4952. 0.0767 s / img. ETA=0:05:10
[10/10 10:58:11 d2.evaluation.evaluator]: Inference done 1093/4952. 0.0767 s / img. ETA=0:05:05
[10/10 10:58:16 d2.evaluation.evaluator]: Inference done 1156/4952. 0.0768 s / img. ETA=0:05:00
[10/10 10:58:21 d2.evaluation.evaluator]: Inference done 1220/4952. 0.0768 s / img. ETA=0:04:55
[10/10 10:58:26 d2.evaluation.evaluator]: Inference done 1283/4952. 0.0768 s / img. ETA=0:04:50
[10/10 10:58:31 d2.evaluation.evaluator]: Inference done 1347/4952. 0.0768 s / img. ETA=0:04:45
[10/10 10:58:36 d2.evaluation.evaluator]: Inference done 1411/4952. 0.0768 s / img. ETA=0:04:40
[10/10 10:58:41 d2.evaluation.evaluator]: Inference done 1474/4952. 0.0768 s / img. ETA=0:04:35
[10/10 10:58:46 d2.evaluation.evaluator]: Inference done 1537/4952. 0.0769 s / img. ETA=0:04:30
[10/10 10:58:51 d2.evaluation.evaluator]: Inference done 1600/4952. 0.0769 s / img. ETA=0:04:25
[10/10 10:58:56 d2.evaluation.evaluator]: Inference done 1663/4952. 0.0769 s / img. ETA=0:04:20
[10/10 10:59:01 d2.evaluation.evaluator]: Inference done 1727/4952. 0.0769 s / img. ETA=0:04:15
[10/10 10:59:06 d2.evaluation.evaluator]: Inference done 1791/4952. 0.0769 s / img. ETA=0:04:10
[10/10 10:59:12 d2.evaluation.evaluator]: Inference done 1855/4952. 0.0769 s / img. ETA=0:04:05
[10/10 10:59:17 d2.evaluation.evaluator]: Inference done 1919/4952. 0.0769 s / img. ETA=0:04:00
[10/10 10:59:22 d2.evaluation.evaluator]: Inference done 1983/4952. 0.0769 s / img. ETA=0:03:55
[10/10 10:59:27 d2.evaluation.evaluator]: Inference done 2047/4952. 0.0769 s / img. ETA=0:03:50
[10/10 10:59:32 d2.evaluation.evaluator]: Inference done 2110/4952. 0.0769 s / img. ETA=0:03:45
[10/10 10:59:37 d2.evaluation.evaluator]: Inference done 2174/4952. 0.0769 s / img. ETA=0:03:40
[10/10 10:59:42 d2.evaluation.evaluator]: Inference done 2238/4952. 0.0769 s / img. ETA=0:03:35
[10/10 10:59:47 d2.evaluation.evaluator]: Inference done 2301/4952. 0.0769 s / img. ETA=0:03:30
[10/10 10:59:52 d2.evaluation.evaluator]: Inference done 2364/4952. 0.0769 s / img. ETA=0:03:25
[10/10 10:59:57 d2.evaluation.evaluator]: Inference done 2427/4952. 0.0769 s / img. ETA=0:03:20
[10/10 11:00:02 d2.evaluation.evaluator]: Inference done 2490/4952. 0.0769 s / img. ETA=0:03:15
[10/10 11:00:07 d2.evaluation.evaluator]: Inference done 2553/4952. 0.0770 s / img. ETA=0:03:10
[10/10 11:00:12 d2.evaluation.evaluator]: Inference done 2616/4952. 0.0770 s / img. ETA=0:03:05
[10/10 11:00:17 d2.evaluation.evaluator]: Inference done 2678/4952. 0.0770 s / img. ETA=0:03:00
[10/10 11:00:22 d2.evaluation.evaluator]: Inference done 2741/4952. 0.0770 s / img. ETA=0:02:55
[10/10 11:00:27 d2.evaluation.evaluator]: Inference done 2804/4952. 0.0770 s / img. ETA=0:02:50
[10/10 11:00:32 d2.evaluation.evaluator]: Inference done 2867/4952. 0.0770 s / img. ETA=0:02:45
[10/10 11:00:37 d2.evaluation.evaluator]: Inference done 2926/4952. 0.0771 s / img. ETA=0:02:41
[10/10 11:00:42 d2.evaluation.evaluator]: Inference done 2989/4952. 0.0772 s / img. ETA=0:02:36
[10/10 11:00:47 d2.evaluation.evaluator]: Inference done 3053/4952. 0.0772 s / img. ETA=0:02:31
[10/10 11:00:52 d2.evaluation.evaluator]: Inference done 3116/4952. 0.0772 s / img. ETA=0:02:26
[10/10 11:00:57 d2.evaluation.evaluator]: Inference done 3179/4952. 0.0772 s / img. ETA=0:02:21
[10/10 11:01:02 d2.evaluation.evaluator]: Inference done 3242/4952. 0.0772 s / img. ETA=0:02:16
[10/10 11:01:08 d2.evaluation.evaluator]: Inference done 3305/4952. 0.0772 s / img. ETA=0:02:11
[10/10 11:01:13 d2.evaluation.evaluator]: Inference done 3368/4952. 0.0772 s / img. ETA=0:02:06
[10/10 11:01:18 d2.evaluation.evaluator]: Inference done 3431/4952. 0.0772 s / img. ETA=0:02:01
[10/10 11:01:23 d2.evaluation.evaluator]: Inference done 3494/4952. 0.0772 s / img. ETA=0:01:56
[10/10 11:01:28 d2.evaluation.evaluator]: Inference done 3557/4952. 0.0772 s / img. ETA=0:01:51
[10/10 11:01:33 d2.evaluation.evaluator]: Inference done 3619/4952. 0.0773 s / img. ETA=0:01:46
[10/10 11:01:38 d2.evaluation.evaluator]: Inference done 3682/4952. 0.0773 s / img. ETA=0:01:41
[10/10 11:01:43 d2.evaluation.evaluator]: Inference done 3745/4952. 0.0773 s / img. ETA=0:01:36
[10/10 11:01:48 d2.evaluation.evaluator]: Inference done 3808/4952. 0.0773 s / img. ETA=0:01:31
[10/10 11:01:53 d2.evaluation.evaluator]: Inference done 3872/4952. 0.0773 s / img. ETA=0:01:26
[10/10 11:01:58 d2.evaluation.evaluator]: Inference done 3935/4952. 0.0773 s / img. ETA=0:01:21
[10/10 11:02:03 d2.evaluation.evaluator]: Inference done 3998/4952. 0.0773 s / img. ETA=0:01:16
[10/10 11:02:08 d2.evaluation.evaluator]: Inference done 4061/4952. 0.0773 s / img. ETA=0:01:10
[10/10 11:02:13 d2.evaluation.evaluator]: Inference done 4125/4952. 0.0773 s / img. ETA=0:01:05
[10/10 11:02:18 d2.evaluation.evaluator]: Inference done 4188/4952. 0.0773 s / img. ETA=0:01:00
[10/10 11:02:23 d2.evaluation.evaluator]: Inference done 4251/4952. 0.0773 s / img. ETA=0:00:55
[10/10 11:02:28 d2.evaluation.evaluator]: Inference done 4313/4952. 0.0773 s / img. ETA=0:00:50
[10/10 11:02:33 d2.evaluation.evaluator]: Inference done 4375/4952. 0.0773 s / img. ETA=0:00:46
[10/10 11:02:38 d2.evaluation.evaluator]: Inference done 4439/4952. 0.0773 s / img. ETA=0:00:40
[10/10 11:02:43 d2.evaluation.evaluator]: Inference done 4501/4952. 0.0774 s / img. ETA=0:00:35
[10/10 11:02:48 d2.evaluation.evaluator]: Inference done 4564/4952. 0.0774 s / img. ETA=0:00:30
[10/10 11:02:54 d2.evaluation.evaluator]: Inference done 4627/4952. 0.0774 s / img. ETA=0:00:25
[10/10 11:02:59 d2.evaluation.evaluator]: Inference done 4690/4952. 0.0774 s / img. ETA=0:00:20
[10/10 11:03:04 d2.evaluation.evaluator]: Inference done 4753/4952. 0.0774 s / img. ETA=0:00:15
[10/10 11:03:09 d2.evaluation.evaluator]: Inference done 4816/4952. 0.0774 s / img. ETA=0:00:10
[10/10 11:03:14 d2.evaluation.evaluator]: Inference done 4879/4952. 0.0774 s / img. ETA=0:00:05
[10/10 11:03:19 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0774 s / img. ETA=0:00:00
[10/10 11:03:19 d2.evaluation.evaluator]: Total inference time: 0:06:34.603606 (0.079766 s / img per device, on 1 devices)
[10/10 11:03:19 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:22 (0.077365 s / img per device, on 1 devices)
[10/10 11:03:20 detectron2]: Image level evaluation complete for custom_voc_2007_test
[10/10 11:03:20 detectron2]: Results for custom_voc_2007_test
Traceback (most recent call last):
  File "main.py", line 198, in <module>
    args=(args,),
  File "/home/dksingh/objdet/detectron2/detectron2/engine/launch.py", line 62, in launch
    main_func(*args)
  File "main.py", line 177, in main
    return do_test(cfg, model)
  File "main.py", line 63, in do_test
    evaluator._coco_api.cats)
  File "/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py", line 55, in only_mAP_analysis
    PR_plotter(Precision, Recall, categories[cls_no+1]['name'], ap)
  File "/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py", line 17, in PR_plotter
    plt.savefig(f"PR/{cls_name}_Precision_recall.pdf", bbox_inches="tight")
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/pyplot.py", line 722, in savefig
    res = fig.savefig(*args, **kwargs)
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/figure.py", line 2180, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backend_bases.py", line 2082, in print_figure
    **kwargs)
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backends/backend_pdf.py", line 2496, in print_pdf
    file = PdfFile(filename, metadata=metadata)
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/backends/backend_pdf.py", line 432, in __init__
    fh, opened = cbook.to_filehandle(filename, "wb", return_opened=True)
  File "/home/dksingh/.local/lib/python3.7/site-packages/matplotlib/cbook/__init__.py", line 432, in to_filehandle
    fh = open(fname, flag, encoding=encoding)
FileNotFoundError: [Errno 2] No such file or directory: 'PR/aeroplane_Precision_recall.pdf'

Are there any missing files in the repo?
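
For what it's worth, matplotlib's savefig does not create missing parent directories, so the crash can likely be avoided by creating PR/ before saving. A minimal sketch, assuming the plotting happens as in the traceback above (the function name is illustrative):

import os
import matplotlib.pyplot as plt

def save_pr_plot(cls_name):
    # savefig raises FileNotFoundError if PR/ does not exist,
    # so create the output directory first.
    os.makedirs("PR", exist_ok=True)
    plt.savefig(f"PR/{cls_name}_Precision_recall.pdf", bbox_inches="tight")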

deepaksinghcv commented 3 years ago

I created an empty PR/aeroplane_Precision_recall.pdf with touch, just to check what happens, and ran the following command:

python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only

Log:

[10/10 11:15:08 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[10/10 11:15:08 d2.data.common]: Serialized dataset takes 1.87 MiB
[10/10 11:15:08 d2.data.dataset_mapper]: Augmentations used in training: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[10/10 11:15:08 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 11:15:09 d2.evaluation.evaluator]: Inference done 11/4952. 0.0750 s / img. ETA=0:06:19
[10/10 11:15:14 d2.evaluation.evaluator]: Inference done 73/4952. 0.0785 s / img. ETA=0:06:34
[10/10 11:15:19 d2.evaluation.evaluator]: Inference done 137/4952. 0.0773 s / img. ETA=0:06:23
[10/10 11:15:24 d2.evaluation.evaluator]: Inference done 202/4952. 0.0767 s / img. ETA=0:06:15
[10/10 11:15:29 d2.evaluation.evaluator]: Inference done 266/4952. 0.0766 s / img. ETA=0:06:10
[10/10 11:15:34 d2.evaluation.evaluator]: Inference done 330/4952. 0.0765 s / img. ETA=0:06:04
[10/10 11:15:39 d2.evaluation.evaluator]: Inference done 394/4952. 0.0766 s / img. ETA=0:05:59
[10/10 11:15:44 d2.evaluation.evaluator]: Inference done 458/4952. 0.0766 s / img. ETA=0:05:54
[10/10 11:15:50 d2.evaluation.evaluator]: Inference done 522/4952. 0.0765 s / img. ETA=0:05:49
[10/10 11:15:55 d2.evaluation.evaluator]: Inference done 586/4952. 0.0765 s / img. ETA=0:05:44
[10/10 11:16:00 d2.evaluation.evaluator]: Inference done 650/4952. 0.0765 s / img. ETA=0:05:39
[10/10 11:16:05 d2.evaluation.evaluator]: Inference done 713/4952. 0.0766 s / img. ETA=0:05:34
[10/10 11:16:10 d2.evaluation.evaluator]: Inference done 777/4952. 0.0766 s / img. ETA=0:05:29
[10/10 11:16:15 d2.evaluation.evaluator]: Inference done 841/4952. 0.0765 s / img. ETA=0:05:24
[10/10 11:16:20 d2.evaluation.evaluator]: Inference done 905/4952. 0.0765 s / img. ETA=0:05:19
[10/10 11:16:25 d2.evaluation.evaluator]: Inference done 969/4952. 0.0765 s / img. ETA=0:05:14
[10/10 11:16:30 d2.evaluation.evaluator]: Inference done 1033/4952. 0.0766 s / img. ETA=0:05:09
[10/10 11:16:35 d2.evaluation.evaluator]: Inference done 1096/4952. 0.0766 s / img. ETA=0:05:04
[10/10 11:16:40 d2.evaluation.evaluator]: Inference done 1159/4952. 0.0767 s / img. ETA=0:04:59
[10/10 11:16:45 d2.evaluation.evaluator]: Inference done 1223/4952. 0.0766 s / img. ETA=0:04:54
[10/10 11:16:50 d2.evaluation.evaluator]: Inference done 1287/4952. 0.0767 s / img. ETA=0:04:49
[10/10 11:16:55 d2.evaluation.evaluator]: Inference done 1351/4952. 0.0767 s / img. ETA=0:04:44
[10/10 11:17:00 d2.evaluation.evaluator]: Inference done 1415/4952. 0.0767 s / img. ETA=0:04:39
[10/10 11:17:05 d2.evaluation.evaluator]: Inference done 1478/4952. 0.0767 s / img. ETA=0:04:34
[10/10 11:17:10 d2.evaluation.evaluator]: Inference done 1542/4952. 0.0767 s / img. ETA=0:04:29
[10/10 11:17:15 d2.evaluation.evaluator]: Inference done 1605/4952. 0.0768 s / img. ETA=0:04:24
[10/10 11:17:20 d2.evaluation.evaluator]: Inference done 1669/4952. 0.0767 s / img. ETA=0:04:19
[10/10 11:17:25 d2.evaluation.evaluator]: Inference done 1732/4952. 0.0767 s / img. ETA=0:04:14
[10/10 11:17:30 d2.evaluation.evaluator]: Inference done 1796/4952. 0.0767 s / img. ETA=0:04:09
[10/10 11:17:35 d2.evaluation.evaluator]: Inference done 1860/4952. 0.0767 s / img. ETA=0:04:04
[10/10 11:17:40 d2.evaluation.evaluator]: Inference done 1924/4952. 0.0767 s / img. ETA=0:03:59
[10/10 11:17:46 d2.evaluation.evaluator]: Inference done 1988/4952. 0.0767 s / img. ETA=0:03:54
[10/10 11:17:51 d2.evaluation.evaluator]: Inference done 2052/4952. 0.0767 s / img. ETA=0:03:49
[10/10 11:17:56 d2.evaluation.evaluator]: Inference done 2115/4952. 0.0767 s / img. ETA=0:03:44
[10/10 11:18:01 d2.evaluation.evaluator]: Inference done 2179/4952. 0.0767 s / img. ETA=0:03:39
[10/10 11:18:06 d2.evaluation.evaluator]: Inference done 2242/4952. 0.0768 s / img. ETA=0:03:34
[10/10 11:18:11 d2.evaluation.evaluator]: Inference done 2305/4952. 0.0768 s / img. ETA=0:03:29
[10/10 11:18:16 d2.evaluation.evaluator]: Inference done 2369/4952. 0.0768 s / img. ETA=0:03:24
[10/10 11:18:21 d2.evaluation.evaluator]: Inference done 2432/4952. 0.0768 s / img. ETA=0:03:19
[10/10 11:18:26 d2.evaluation.evaluator]: Inference done 2495/4952. 0.0768 s / img. ETA=0:03:14
[10/10 11:18:31 d2.evaluation.evaluator]: Inference done 2558/4952. 0.0769 s / img. ETA=0:03:09
[10/10 11:18:36 d2.evaluation.evaluator]: Inference done 2622/4952. 0.0769 s / img. ETA=0:03:04
[10/10 11:18:41 d2.evaluation.evaluator]: Inference done 2684/4952. 0.0769 s / img. ETA=0:02:59
[10/10 11:18:46 d2.evaluation.evaluator]: Inference done 2746/4952. 0.0769 s / img. ETA=0:02:54
[10/10 11:18:51 d2.evaluation.evaluator]: Inference done 2810/4952. 0.0769 s / img. ETA=0:02:49
[10/10 11:18:56 d2.evaluation.evaluator]: Inference done 2873/4952. 0.0770 s / img. ETA=0:02:44
[10/10 11:19:01 d2.evaluation.evaluator]: Inference done 2936/4952. 0.0770 s / img. ETA=0:02:39
[10/10 11:19:06 d2.evaluation.evaluator]: Inference done 2998/4952. 0.0770 s / img. ETA=0:02:35
[10/10 11:19:11 d2.evaluation.evaluator]: Inference done 3061/4952. 0.0770 s / img. ETA=0:02:30
[10/10 11:19:16 d2.evaluation.evaluator]: Inference done 3124/4952. 0.0770 s / img. ETA=0:02:25
[10/10 11:19:22 d2.evaluation.evaluator]: Inference done 3185/4952. 0.0771 s / img. ETA=0:02:20
[10/10 11:19:27 d2.evaluation.evaluator]: Inference done 3248/4952. 0.0771 s / img. ETA=0:02:15
[10/10 11:19:32 d2.evaluation.evaluator]: Inference done 3311/4952. 0.0772 s / img. ETA=0:02:10
[10/10 11:19:37 d2.evaluation.evaluator]: Inference done 3374/4952. 0.0772 s / img. ETA=0:02:05
[10/10 11:19:42 d2.evaluation.evaluator]: Inference done 3438/4952. 0.0772 s / img. ETA=0:02:00
[10/10 11:19:47 d2.evaluation.evaluator]: Inference done 3500/4952. 0.0772 s / img. ETA=0:01:55
[10/10 11:19:52 d2.evaluation.evaluator]: Inference done 3562/4952. 0.0772 s / img. ETA=0:01:50
[10/10 11:19:57 d2.evaluation.evaluator]: Inference done 3625/4952. 0.0772 s / img. ETA=0:01:45
[10/10 11:20:02 d2.evaluation.evaluator]: Inference done 3688/4952. 0.0772 s / img. ETA=0:01:40
[10/10 11:20:07 d2.evaluation.evaluator]: Inference done 3751/4952. 0.0772 s / img. ETA=0:01:35
[10/10 11:20:12 d2.evaluation.evaluator]: Inference done 3813/4952. 0.0773 s / img. ETA=0:01:30
[10/10 11:20:17 d2.evaluation.evaluator]: Inference done 3876/4952. 0.0773 s / img. ETA=0:01:25
[10/10 11:20:22 d2.evaluation.evaluator]: Inference done 3938/4952. 0.0773 s / img. ETA=0:01:20
[10/10 11:20:27 d2.evaluation.evaluator]: Inference done 4001/4952. 0.0773 s / img. ETA=0:01:15
[10/10 11:20:32 d2.evaluation.evaluator]: Inference done 4064/4952. 0.0773 s / img. ETA=0:01:10
[10/10 11:20:37 d2.evaluation.evaluator]: Inference done 4128/4952. 0.0773 s / img. ETA=0:01:05
[10/10 11:20:42 d2.evaluation.evaluator]: Inference done 4190/4952. 0.0773 s / img. ETA=0:01:00
[10/10 11:20:47 d2.evaluation.evaluator]: Inference done 4253/4952. 0.0773 s / img. ETA=0:00:55
[10/10 11:20:52 d2.evaluation.evaluator]: Inference done 4315/4952. 0.0773 s / img. ETA=0:00:50
[10/10 11:20:57 d2.evaluation.evaluator]: Inference done 4377/4952. 0.0773 s / img. ETA=0:00:45
[10/10 11:21:02 d2.evaluation.evaluator]: Inference done 4440/4952. 0.0774 s / img. ETA=0:00:40
[10/10 11:21:07 d2.evaluation.evaluator]: Inference done 4502/4952. 0.0774 s / img. ETA=0:00:35
[10/10 11:21:12 d2.evaluation.evaluator]: Inference done 4564/4952. 0.0774 s / img. ETA=0:00:30
[10/10 11:21:17 d2.evaluation.evaluator]: Inference done 4627/4952. 0.0774 s / img. ETA=0:00:25
[10/10 11:21:22 d2.evaluation.evaluator]: Inference done 4689/4952. 0.0774 s / img. ETA=0:00:20
[10/10 11:21:27 d2.evaluation.evaluator]: Inference done 4753/4952. 0.0774 s / img. ETA=0:00:15
[10/10 11:21:33 d2.evaluation.evaluator]: Inference done 4816/4952. 0.0774 s / img. ETA=0:00:10
[10/10 11:21:38 d2.evaluation.evaluator]: Inference done 4879/4952. 0.0774 s / img. ETA=0:00:05
[10/10 11:21:43 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0774 s / img. ETA=0:00:00
[10/10 11:21:43 d2.evaluation.evaluator]: Total inference time: 0:06:34.775224 (0.079801 s / img per device, on 1 devices)
[10/10 11:21:43 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:22 (0.077401 s / img per device, on 1 devices)
[10/10 11:21:44 detectron2]: Image level evaluation complete for custom_voc_2007_test
[10/10 11:21:44 detectron2]: Results for custom_voc_2007_test
[10/10 11:21:44 detectron2]: AP for 0: 2.896947080444079e-06
[10/10 11:21:44 detectron2]: AP for 1: 0.0
[10/10 11:21:44 detectron2]: AP for 2: 7.036307215457782e-05
[10/10 11:21:44 detectron2]: AP for 3: 0.0
[10/10 11:21:45 detectron2]: AP for 4: 0.0
[10/10 11:21:45 detectron2]: AP for 5: 0.0
[10/10 11:21:45 detectron2]: AP for 6: 0.0
[10/10 11:21:45 detectron2]: AP for 7: 0.0
[10/10 11:21:45 detectron2]: AP for 8: 0.0
[10/10 11:21:46 detectron2]: AP for 9: 0.00013395248970482498
[10/10 11:21:46 detectron2]: AP for 10: 0.0
[10/10 11:21:46 detectron2]: AP for 11: 0.0
[10/10 11:21:46 detectron2]: AP for 12: 0.0
[10/10 11:21:47 detectron2]: AP for 13: 6.641517302341526e-06
[10/10 11:21:47 detectron2]: AP for 15: 0.0
[10/10 11:21:47 detectron2]: AP for 16: 0.0
[10/10 11:21:47 detectron2]: AP for 17: 0.0
[10/10 11:21:47 detectron2]: AP for 18: 0.0
[10/10 11:21:47 detectron2]: AP for 19: 0.0
[10/10 11:21:47 detectron2]: mAP: 1.125547532865312e-05
WARNING [10/10 11:21:48 d2.data.datasets.coco]:
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[10/10 11:21:48 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[10/10 11:21:48 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 15235        |  aeroplane  | 0            |   bicycle   | 0            |
|    bird    | 0            |    boat     | 0            |   bottle    | 0            |
|    bus     | 0            |     car     | 0            |     cat     | 0            |
|   chair    | 0            |     cow     | 0            | diningtable | 0            |
|    dog     | 0            |    horse    | 0            |  motorbike  | 0            |
|   person   | 0            | pottedplant | 0            |    sheep    | 0            |
|    sofa    | 0            |    train    | 0            |  tvmonitor  | 0            |
|            |              |             |              |             |              |
|   total    | 15235        |             |              |             |              |
[10/10 11:21:48 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[10/10 11:21:48 d2.data.common]: Serialized dataset takes 8.39 MiB
[10/10 11:21:48 d2.data.dataset_mapper]: Augmentations used in training: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[10/10 11:21:48 d2.evaluation.evaluator]: Start inference on 4952 images
[10/10 11:21:50 d2.evaluation.evaluator]: Inference done 11/4952. 0.0778 s / img. ETA=0:06:34
[10/10 11:21:55 d2.evaluation.evaluator]: Inference done 74/4952. 0.0781 s / img. ETA=0:06:32
[10/10 11:22:00 d2.evaluation.evaluator]: Inference done 137/4952. 0.0781 s / img. ETA=0:06:27
[10/10 11:22:05 d2.evaluation.evaluator]: Inference done 199/4952. 0.0783 s / img. ETA=0:06:23
[10/10 11:22:10 d2.evaluation.evaluator]: Inference done 261/4952. 0.0787 s / img. ETA=0:06:19
[10/10 11:22:15 d2.evaluation.evaluator]: Inference done 322/4952. 0.0789 s / img. ETA=0:06:16
[10/10 11:22:20 d2.evaluation.evaluator]: Inference done 384/4952. 0.0790 s / img. ETA=0:06:11
[10/10 11:22:25 d2.evaluation.evaluator]: Inference done 445/4952. 0.0791 s / img. ETA=0:06:07
[10/10 11:22:30 d2.evaluation.evaluator]: Inference done 506/4952. 0.0792 s / img. ETA=0:06:02
[10/10 11:22:35 d2.evaluation.evaluator]: Inference done 567/4952. 0.0793 s / img. ETA=0:05:58
[10/10 11:22:40 d2.evaluation.evaluator]: Inference done 628/4952. 0.0795 s / img. ETA=0:05:53
[10/10 11:22:45 d2.evaluation.evaluator]: Inference done 690/4952. 0.0794 s / img. ETA=0:05:48
[10/10 11:22:50 d2.evaluation.evaluator]: Inference done 752/4952. 0.0794 s / img. ETA=0:05:43
[10/10 11:22:55 d2.evaluation.evaluator]: Inference done 814/4952. 0.0794 s / img. ETA=0:05:38
[10/10 11:23:00 d2.evaluation.evaluator]: Inference done 875/4952. 0.0794 s / img. ETA=0:05:33
[10/10 11:23:05 d2.evaluation.evaluator]: Inference done 937/4952. 0.0794 s / img. ETA=0:05:28
[10/10 11:23:10 d2.evaluation.evaluator]: Inference done 999/4952. 0.0794 s / img. ETA=0:05:23
[10/10 11:23:15 d2.evaluation.evaluator]: Inference done 1061/4952. 0.0794 s / img. ETA=0:05:17
[10/10 11:23:20 d2.evaluation.evaluator]: Inference done 1123/4952. 0.0794 s / img. ETA=0:05:12
[10/10 11:23:26 d2.evaluation.evaluator]: Inference done 1185/4952. 0.0794 s / img. ETA=0:05:07
[10/10 11:23:31 d2.evaluation.evaluator]: Inference done 1246/4952. 0.0794 s / img. ETA=0:05:02
[10/10 11:23:36 d2.evaluation.evaluator]: Inference done 1307/4952. 0.0795 s / img. ETA=0:04:58
[10/10 11:23:41 d2.evaluation.evaluator]: Inference done 1369/4952. 0.0795 s / img. ETA=0:04:53
[10/10 11:23:46 d2.evaluation.evaluator]: Inference done 1431/4952. 0.0794 s / img. ETA=0:04:47
[10/10 11:23:51 d2.evaluation.evaluator]: Inference done 1493/4952. 0.0794 s / img. ETA=0:04:42
[10/10 11:23:56 d2.evaluation.evaluator]: Inference done 1555/4952. 0.0794 s / img. ETA=0:04:37
[10/10 11:24:01 d2.evaluation.evaluator]: Inference done 1616/4952. 0.0794 s / img. ETA=0:04:32
[10/10 11:24:06 d2.evaluation.evaluator]: Inference done 1679/4952. 0.0793 s / img. ETA=0:04:27
[10/10 11:24:11 d2.evaluation.evaluator]: Inference done 1741/4952. 0.0793 s / img. ETA=0:04:22
[10/10 11:24:16 d2.evaluation.evaluator]: Inference done 1803/4952. 0.0793 s / img. ETA=0:04:17
[10/10 11:24:21 d2.evaluation.evaluator]: Inference done 1865/4952. 0.0793 s / img. ETA=0:04:12
[10/10 11:24:26 d2.evaluation.evaluator]: Inference done 1928/4952. 0.0793 s / img. ETA=0:04:06
[10/10 11:24:31 d2.evaluation.evaluator]: Inference done 1989/4952. 0.0793 s / img. ETA=0:04:01
[10/10 11:24:36 d2.evaluation.evaluator]: Inference done 2051/4952. 0.0793 s / img. ETA=0:03:56
[10/10 11:24:41 d2.evaluation.evaluator]: Inference done 2111/4952. 0.0794 s / img. ETA=0:03:52
[10/10 11:24:46 d2.evaluation.evaluator]: Inference done 2173/4952. 0.0794 s / img. ETA=0:03:47
[10/10 11:24:51 d2.evaluation.evaluator]: Inference done 2233/4952. 0.0794 s / img. ETA=0:03:42
[10/10 11:24:56 d2.evaluation.evaluator]: Inference done 2295/4952. 0.0794 s / img. ETA=0:03:37
[10/10 11:25:01 d2.evaluation.evaluator]: Inference done 2356/4952. 0.0795 s / img. ETA=0:03:32
[10/10 11:25:06 d2.evaluation.evaluator]: Inference done 2418/4952. 0.0795 s / img. ETA=0:03:27
[10/10 11:25:12 d2.evaluation.evaluator]: Inference done 2479/4952. 0.0795 s / img. ETA=0:03:22
[10/10 11:25:17 d2.evaluation.evaluator]: Inference done 2541/4952. 0.0795 s / img. ETA=0:03:17
[10/10 11:25:22 d2.evaluation.evaluator]: Inference done 2601/4952. 0.0795 s / img. ETA=0:03:12
[10/10 11:25:27 d2.evaluation.evaluator]: Inference done 2663/4952. 0.0795 s / img. ETA=0:03:07
[10/10 11:25:32 d2.evaluation.evaluator]: Inference done 2725/4952. 0.0795 s / img. ETA=0:03:02
[10/10 11:25:37 d2.evaluation.evaluator]: Inference done 2787/4952. 0.0795 s / img. ETA=0:02:57
[10/10 11:25:42 d2.evaluation.evaluator]: Inference done 2848/4952. 0.0795 s / img. ETA=0:02:52
[10/10 11:25:47 d2.evaluation.evaluator]: Inference done 2909/4952. 0.0795 s / img. ETA=0:02:47
[10/10 11:25:52 d2.evaluation.evaluator]: Inference done 2972/4952. 0.0794 s / img. ETA=0:02:41
[10/10 11:25:57 d2.evaluation.evaluator]: Inference done 3033/4952. 0.0795 s / img. ETA=0:02:36
[10/10 11:26:02 d2.evaluation.evaluator]: Inference done 3094/4952. 0.0795 s / img. ETA=0:02:31
[10/10 11:26:07 d2.evaluation.evaluator]: Inference done 3156/4952. 0.0795 s / img. ETA=0:02:26
[10/10 11:26:12 d2.evaluation.evaluator]: Inference done 3217/4952. 0.0795 s / img. ETA=0:02:21
[10/10 11:26:17 d2.evaluation.evaluator]: Inference done 3278/4952. 0.0795 s / img. ETA=0:02:16
[10/10 11:26:22 d2.evaluation.evaluator]: Inference done 3340/4952. 0.0795 s / img. ETA=0:02:11
[10/10 11:26:27 d2.evaluation.evaluator]: Inference done 3402/4952. 0.0795 s / img. ETA=0:02:06
[10/10 11:26:32 d2.evaluation.evaluator]: Inference done 3465/4952. 0.0795 s / img. ETA=0:02:01
[10/10 11:26:37 d2.evaluation.evaluator]: Inference done 3527/4952. 0.0795 s / img. ETA=0:01:56
[10/10 11:26:42 d2.evaluation.evaluator]: Inference done 3588/4952. 0.0795 s / img. ETA=0:01:51
[10/10 11:26:47 d2.evaluation.evaluator]: Inference done 3649/4952. 0.0795 s / img. ETA=0:01:46
[10/10 11:26:52 d2.evaluation.evaluator]: Inference done 3711/4952. 0.0795 s / img. ETA=0:01:41
[10/10 11:26:57 d2.evaluation.evaluator]: Inference done 3772/4952. 0.0795 s / img. ETA=0:01:36
[10/10 11:27:02 d2.evaluation.evaluator]: Inference done 3833/4952. 0.0795 s / img. ETA=0:01:31
[10/10 11:27:07 d2.evaluation.evaluator]: Inference done 3894/4952. 0.0795 s / img. ETA=0:01:26
[10/10 11:27:12 d2.evaluation.evaluator]: Inference done 3956/4952. 0.0795 s / img. ETA=0:01:21
[10/10 11:27:17 d2.evaluation.evaluator]: Inference done 4017/4952. 0.0795 s / img. ETA=0:01:16
[10/10 11:27:22 d2.evaluation.evaluator]: Inference done 4079/4952. 0.0795 s / img. ETA=0:01:11
[10/10 11:27:27 d2.evaluation.evaluator]: Inference done 4141/4952. 0.0795 s / img. ETA=0:01:06
[10/10 11:27:33 d2.evaluation.evaluator]: Inference done 4202/4952. 0.0795 s / img. ETA=0:01:01
[10/10 11:27:38 d2.evaluation.evaluator]: Inference done 4263/4952. 0.0795 s / img. ETA=0:00:56
[10/10 11:27:43 d2.evaluation.evaluator]: Inference done 4325/4952. 0.0795 s / img. ETA=0:00:51
[10/10 11:27:48 d2.evaluation.evaluator]: Inference done 4387/4952. 0.0795 s / img. ETA=0:00:46
[10/10 11:27:53 d2.evaluation.evaluator]: Inference done 4449/4952. 0.0795 s / img. ETA=0:00:41
[10/10 11:27:58 d2.evaluation.evaluator]: Inference done 4511/4952. 0.0795 s / img. ETA=0:00:36
[10/10 11:28:03 d2.evaluation.evaluator]: Inference done 4572/4952. 0.0795 s / img. ETA=0:00:31
[10/10 11:28:08 d2.evaluation.evaluator]: Inference done 4634/4952. 0.0795 s / img. ETA=0:00:26
[10/10 11:28:13 d2.evaluation.evaluator]: Inference done 4696/4952. 0.0795 s / img. ETA=0:00:20
[10/10 11:28:18 d2.evaluation.evaluator]: Inference done 4758/4952. 0.0795 s / img. ETA=0:00:15
[10/10 11:28:23 d2.evaluation.evaluator]: Inference done 4820/4952. 0.0795 s / img. ETA=0:00:10
[10/10 11:28:28 d2.evaluation.evaluator]: Inference done 4881/4952. 0.0795 s / img. ETA=0:00:05
[10/10 11:28:33 d2.evaluation.evaluator]: Inference done 4942/4952. 0.0795 s / img. ETA=0:00:00
[10/10 11:28:34 d2.evaluation.evaluator]: Total inference time: 0:06:44.715277 (0.081810 s / img per device, on 1 devices)
[10/10 11:28:34 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:33 (0.079462 s / img per device, on 1 devices)
[10/10 11:28:34 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[10/10 11:28:34 detectron2]: Results for WR1_Mixed_Unknowns
[10/10 11:28:34 detectron2]: AP for 0: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:34 detectron2]: AP for 1: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:35 detectron2]: AP for 2: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:35 detectron2]: AP for 3: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:35 detectron2]: AP for 4: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:35 detectron2]: AP for 5: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:35 detectron2]: AP for 6: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:36 detectron2]: AP for 7: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:36 detectron2]: AP for 8: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:36 detectron2]: AP for 9: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:36 detectron2]: AP for 10: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:36 detectron2]: AP for 11: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:37 detectron2]: AP for 12: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:37 detectron2]: AP for 13: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:37 detectron2]: AP for 15: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:37 detectron2]: AP for 16: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:37 detectron2]: AP for 17: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:38 detectron2]: AP for 18: 0.0
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:10: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  plt.subplots()
[10/10 11:28:38 detectron2]: AP for 19: 0.0
[10/10 11:28:38 detectron2]: mAP: 0.0
[10/10 11:28:38 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[10/10 11:28:38 detectron2]: AP for 0: 1.778590444700967e-06
[10/10 11:28:38 detectron2]: AP for 1: 0.0
[10/10 11:28:38 detectron2]: AP for 2: 4.170141983195208e-05
[10/10 11:28:38 detectron2]: AP for 3: 0.0
[10/10 11:28:38 detectron2]: AP for 4: 0.0
[10/10 11:28:38 detectron2]: AP for 5: 0.0
[10/10 11:28:38 detectron2]: AP for 6: 0.0
[10/10 11:28:38 detectron2]: AP for 7: 0.0
[10/10 11:28:38 detectron2]: AP for 8: 0.0
[10/10 11:28:38 detectron2]: AP for 9: 5.68300238228403e-05
[10/10 11:28:38 detectron2]: AP for 10: 0.0
[10/10 11:28:38 detectron2]: AP for 11: 0.0
[10/10 11:28:38 detectron2]: AP for 12: 0.0
[10/10 11:28:38 detectron2]: AP for 13: 3.277775022070273e-06
[10/10 11:28:38 detectron2]: AP for 15: 0.0
[10/10 11:28:38 detectron2]: AP for 16: 0.0
[10/10 11:28:38 detectron2]: AP for 17: 0.0
[10/10 11:28:38 detectron2]: AP for 18: 0.0
[10/10 11:28:38 detectron2]: AP for 19: 0.0
[10/10 11:28:38 detectron2]: mAP: 5.451990091387415e-06
/home/dksingh/paper_impl/Elephant-of-object-detection/WIC.py:63: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  eval_info['predictions'][k] = np.array([torch.tensor(_).type(torch.FloatTensor).numpy() for _ in eval_info['predictions'][k]])
[10/10 11:28:40 detectron2]: AP for 0.0: 2.896947080444079e-06
[10/10 11:28:40 detectron2]: AP for 1.0: 0.0
[10/10 11:28:40 detectron2]: AP for 2.0: 7.036307215457782e-05
[10/10 11:28:40 detectron2]: AP for 3.0: 0.0
[10/10 11:28:40 detectron2]: AP for 4.0: 0.0
[10/10 11:28:40 detectron2]: AP for 5.0: 0.0
[10/10 11:28:40 detectron2]: AP for 6.0: 0.0
[10/10 11:28:40 detectron2]: AP for 7.0: 0.0
[10/10 11:28:40 detectron2]: AP for 8.0: 0.0
[10/10 11:28:40 detectron2]: AP for 9.0: 0.00013395248970482498
[10/10 11:28:40 detectron2]: AP for 10.0: 0.0
[10/10 11:28:40 detectron2]: AP for 11.0: 0.0
[10/10 11:28:40 detectron2]: AP for 12.0: 0.0
[10/10 11:28:41 detectron2]: AP for 13.0: 6.641517302341526e-06
[10/10 11:28:41 detectron2]: AP for 15.0: 0.0
[10/10 11:28:41 detectron2]: AP for 16.0: 0.0
[10/10 11:28:41 detectron2]: AP for 17.0: 0.0
[10/10 11:28:41 detectron2]: AP for 18.0: 0.0
[10/10 11:28:41 detectron2]: AP for 19.0: 0.0
[10/10 11:28:41 detectron2]: mAP: 1.125547532865312e-05
[10/10 11:28:41 detectron2]: AP for 0.0: 2.735836005740566e-06
[10/10 11:28:41 detectron2]: AP for 1.0: 0.0
[10/10 11:28:41 detectron2]: AP for 2.0: 6.679580110358074e-05
[10/10 11:28:41 detectron2]: AP for 3.0: 0.0
[10/10 11:28:41 detectron2]: AP for 4.0: 0.0
[10/10 11:28:41 detectron2]: AP for 5.0: 0.0
[10/10 11:28:41 detectron2]: AP for 6.0: 0.0
[10/10 11:28:41 detectron2]: AP for 7.0: 0.0
[10/10 11:28:41 detectron2]: AP for 8.0: 0.0
[10/10 11:28:41 detectron2]: AP for 9.0: 0.00012175324809504673
[10/10 11:28:41 detectron2]: AP for 10.0: 0.0
[10/10 11:28:41 detectron2]: AP for 11.0: 0.0
[10/10 11:28:41 detectron2]: AP for 12.0: 0.0
[10/10 11:28:41 detectron2]: AP for 13.0: 6.086168014007853e-06
[10/10 11:28:41 detectron2]: AP for 15.0: 0.0
[10/10 11:28:41 detectron2]: AP for 16.0: 0.0
[10/10 11:28:41 detectron2]: AP for 17.0: 0.0
[10/10 11:28:41 detectron2]: AP for 18.0: 0.0
[10/10 11:28:41 detectron2]: AP for 19.0: 0.0
[10/10 11:28:41 detectron2]: mAP: 1.0387950169388205e-05
[10/10 11:28:42 detectron2]: AP for 0.0: 2.585729816928506e-06
[10/10 11:28:42 detectron2]: AP for 1.0: 0.0
[10/10 11:28:42 detectron2]: AP for 2.0: 6.30000649834983e-05
[10/10 11:28:42 detectron2]: AP for 3.0: 0.0
[10/10 11:28:42 detectron2]: AP for 4.0: 0.0
[10/10 11:28:42 detectron2]: AP for 5.0: 0.0
[10/10 11:28:42 detectron2]: AP for 6.0: 0.0
[10/10 11:28:42 detectron2]: AP for 7.0: 0.0
[10/10 11:28:42 detectron2]: AP for 8.0: 0.0
[10/10 11:28:42 detectron2]: AP for 9.0: 0.00010935335740214214
[10/10 11:28:42 detectron2]: AP for 10.0: 0.0
[10/10 11:28:42 detectron2]: AP for 11.0: 0.0
[10/10 11:28:42 detectron2]: AP for 12.0: 0.0
[10/10 11:28:42 detectron2]: AP for 13.0: 5.587185569311259e-06
[10/10 11:28:42 detectron2]: AP for 15.0: 0.0
[10/10 11:28:42 detectron2]: AP for 16.0: 0.0
[10/10 11:28:42 detectron2]: AP for 17.0: 0.0
[10/10 11:28:42 detectron2]: AP for 18.0: 0.0
[10/10 11:28:42 detectron2]: AP for 19.0: 0.0
[10/10 11:28:42 detectron2]: mAP: 9.501385648036376e-06
[10/10 11:28:42 detectron2]: AP for 0.0: 2.4474111341987737e-06
[10/10 11:28:42 detectron2]: AP for 1.0: 0.0
[10/10 11:28:42 detectron2]: AP for 2.0: 5.934013825026341e-05
[10/10 11:28:42 detectron2]: AP for 3.0: 0.0
[10/10 11:28:42 detectron2]: AP for 4.0: 0.0
[10/10 11:28:42 detectron2]: AP for 5.0: 0.0
[10/10 11:28:42 detectron2]: AP for 6.0: 0.0
[10/10 11:28:43 detectron2]: AP for 7.0: 0.0
[10/10 11:28:43 detectron2]: AP for 8.0: 0.0
[10/10 11:28:43 detectron2]: AP for 9.0: 9.990010585170239e-05
[10/10 11:28:43 detectron2]: AP for 10.0: 0.0
[10/10 11:28:43 detectron2]: AP for 11.0: 0.0
[10/10 11:28:43 detectron2]: AP for 12.0: 0.0
[10/10 11:28:43 detectron2]: AP for 13.0: 5.13871964358259e-06
[10/10 11:28:43 detectron2]: AP for 15.0: 0.0
[10/10 11:28:43 detectron2]: AP for 16.0: 0.0
[10/10 11:28:43 detectron2]: AP for 17.0: 0.0
[10/10 11:28:43 detectron2]: AP for 18.0: 0.0
[10/10 11:28:43 detectron2]: AP for 19.0: 0.0
[10/10 11:28:43 detectron2]: mAP: 8.780335519986693e-06
[10/10 11:28:43 detectron2]: AP for 0.0: 2.3299867280002218e-06
[10/10 11:28:43 detectron2]: AP for 1.0: 0.0
[10/10 11:28:43 detectron2]: AP for 2.0: 5.4797521443106234e-05
[10/10 11:28:43 detectron2]: AP for 3.0: 0.0
[10/10 11:28:43 detectron2]: AP for 4.0: 0.0
[10/10 11:28:43 detectron2]: AP for 5.0: 0.0
[10/10 11:28:43 detectron2]: AP for 6.0: 0.0
[10/10 11:28:43 detectron2]: AP for 7.0: 0.0
[10/10 11:28:44 detectron2]: AP for 8.0: 0.0
[10/10 11:28:44 detectron2]: AP for 9.0: 8.895213977666572e-05
[10/10 11:28:44 detectron2]: AP for 10.0: 0.0
[10/10 11:28:44 detectron2]: AP for 11.0: 0.0
[10/10 11:28:44 detectron2]: AP for 12.0: 0.0
[10/10 11:28:44 detectron2]: AP for 13.0: 4.765626727021299e-06
[10/10 11:28:44 detectron2]: AP for 15.0: 0.0
[10/10 11:28:44 detectron2]: AP for 16.0: 0.0
[10/10 11:28:44 detectron2]: AP for 17.0: 0.0
[10/10 11:28:44 detectron2]: AP for 18.0: 0.0
[10/10 11:28:44 detectron2]: AP for 19.0: 0.0
[10/10 11:28:44 detectron2]: mAP: 7.93922481534537e-06
[10/10 11:28:44 detectron2]: AP for 0.0: 2.2166461803863058e-06
[10/10 11:28:44 detectron2]: AP for 1.0: 0.0
[10/10 11:28:44 detectron2]: AP for 2.0: 5.150656579644419e-05
[10/10 11:28:44 detectron2]: AP for 3.0: 0.0
[10/10 11:28:44 detectron2]: AP for 4.0: 0.0
[10/10 11:28:44 detectron2]: AP for 5.0: 0.0
[10/10 11:28:44 detectron2]: AP for 6.0: 0.0
[10/10 11:28:44 detectron2]: AP for 7.0: 0.0
[10/10 11:28:44 detectron2]: AP for 8.0: 0.0
[10/10 11:28:44 detectron2]: AP for 9.0: 8.148409688146785e-05
[10/10 11:28:45 detectron2]: AP for 10.0: 0.0
[10/10 11:28:45 detectron2]: AP for 11.0: 0.0
[10/10 11:28:45 detectron2]: AP for 12.0: 0.0
[10/10 11:28:45 detectron2]: AP for 13.0: 4.396841177367605e-06
[10/10 11:28:45 detectron2]: AP for 15.0: 0.0
[10/10 11:28:45 detectron2]: AP for 16.0: 0.0
[10/10 11:28:45 detectron2]: AP for 17.0: 0.0
[10/10 11:28:45 detectron2]: AP for 18.0: 0.0
[10/10 11:28:45 detectron2]: AP for 19.0: 0.0
[10/10 11:28:45 detectron2]: mAP: 7.347587597905658e-06
[10/10 11:28:45 detectron2]: AP for 0.0: 2.1033060875197407e-06
[10/10 11:28:45 detectron2]: AP for 1.0: 0.0
[10/10 11:28:45 detectron2]: AP for 2.0: 4.9380279961042106e-05
[10/10 11:28:45 detectron2]: AP for 3.0: 0.0
[10/10 11:28:45 detectron2]: AP for 4.0: 0.0
[10/10 11:28:45 detectron2]: AP for 5.0: 0.0
[10/10 11:28:45 detectron2]: AP for 6.0: 0.0
[10/10 11:28:45 detectron2]: AP for 7.0: 0.0
[10/10 11:28:45 detectron2]: AP for 8.0: 0.0
[10/10 11:28:45 detectron2]: AP for 9.0: 7.429236575262621e-05
[10/10 11:28:46 detectron2]: AP for 10.0: 0.0
[10/10 11:28:46 detectron2]: AP for 11.0: 0.0
[10/10 11:28:46 detectron2]: AP for 12.0: 0.0
[10/10 11:28:46 detectron2]: AP for 13.0: 4.0986965359479655e-06
[10/10 11:28:46 detectron2]: AP for 15.0: 0.0
[10/10 11:28:46 detectron2]: AP for 16.0: 0.0
[10/10 11:28:46 detectron2]: AP for 17.0: 0.0
[10/10 11:28:46 detectron2]: AP for 18.0: 0.0
[10/10 11:28:46 detectron2]: AP for 19.0: 0.0
[10/10 11:28:46 detectron2]: mAP: 6.835507974756183e-06
[10/10 11:28:46 detectron2]: AP for 0.0: 2.0148736439296044e-06
[10/10 11:28:46 detectron2]: AP for 1.0: 0.0
[10/10 11:28:47 detectron2]: AP for 2.0: 4.75465931231156e-05
[10/10 11:28:47 detectron2]: AP for 3.0: 0.0
[10/10 11:28:47 detectron2]: AP for 4.0: 0.0
[10/10 11:28:47 detectron2]: AP for 5.0: 0.0
[10/10 11:28:47 detectron2]: AP for 6.0: 0.0
[10/10 11:28:47 detectron2]: AP for 7.0: 0.0
[10/10 11:28:47 detectron2]: AP for 8.0: 0.0
[10/10 11:28:47 detectron2]: AP for 9.0: 6.867975025670603e-05
[10/10 11:28:47 detectron2]: AP for 10.0: 0.0
[10/10 11:28:47 detectron2]: AP for 11.0: 0.0
[10/10 11:28:47 detectron2]: AP for 12.0: 0.0
[10/10 11:28:47 detectron2]: AP for 13.0: 3.874572485074168e-06
[10/10 11:28:47 detectron2]: AP for 15.0: 0.0
[10/10 11:28:47 detectron2]: AP for 16.0: 0.0
[10/10 11:28:47 detectron2]: AP for 17.0: 0.0
[10/10 11:28:47 detectron2]: AP for 18.0: 0.0
[10/10 11:28:47 detectron2]: AP for 19.0: 0.0
[10/10 11:28:47 detectron2]: mAP: 6.427146672649542e-06
[10/10 11:28:48 detectron2]: AP for 0.0: 1.926529876072891e-06
[10/10 11:28:48 detectron2]: AP for 1.0: 0.0
[10/10 11:28:48 detectron2]: AP for 2.0: 4.56370908068493e-05
[10/10 11:28:48 detectron2]: AP for 3.0: 0.0
[10/10 11:28:48 detectron2]: AP for 4.0: 0.0
[10/10 11:28:48 detectron2]: AP for 5.0: 0.0
[10/10 11:28:48 detectron2]: AP for 6.0: 0.0
[10/10 11:28:48 detectron2]: AP for 7.0: 0.0
[10/10 11:28:48 detectron2]: AP for 8.0: 0.0
[10/10 11:28:48 detectron2]: AP for 9.0: 6.498148286482319e-05
[10/10 11:28:48 detectron2]: AP for 10.0: 0.0
[10/10 11:28:48 detectron2]: AP for 11.0: 0.0
[10/10 11:28:48 detectron2]: AP for 12.0: 0.0
[10/10 11:28:48 detectron2]: AP for 13.0: 3.6445273963181535e-06
[10/10 11:28:48 detectron2]: AP for 15.0: 0.0
[10/10 11:28:48 detectron2]: AP for 16.0: 0.0
[10/10 11:28:48 detectron2]: AP for 17.0: 0.0
[10/10 11:28:48 detectron2]: AP for 18.0: 0.0
[10/10 11:28:48 detectron2]: AP for 19.0: 0.0
[10/10 11:28:48 detectron2]: mAP: 6.115243650128832e-06
[10/10 11:28:49 detectron2]: AP for 0.0: 1.848534793680301e-06
[10/10 11:28:49 detectron2]: AP for 1.0: 0.0
[10/10 11:28:49 detectron2]: AP for 2.0: 4.347637150203809e-05
[10/10 11:28:49 detectron2]: AP for 3.0: 0.0
[10/10 11:28:49 detectron2]: AP for 4.0: 0.0
[10/10 11:28:49 detectron2]: AP for 5.0: 0.0
[10/10 11:28:49 detectron2]: AP for 6.0: 0.0
[10/10 11:28:49 detectron2]: AP for 7.0: 0.0
[10/10 11:28:49 detectron2]: AP for 8.0: 0.0
[10/10 11:28:49 detectron2]: AP for 9.0: 6.008532363921404e-05
[10/10 11:28:49 detectron2]: AP for 10.0: 0.0
[10/10 11:28:49 detectron2]: AP for 11.0: 0.0
[10/10 11:28:49 detectron2]: AP for 12.0: 0.0
[10/10 11:28:50 detectron2]: AP for 13.0: 3.4480974591133418e-06
[10/10 11:28:50 detectron2]: AP for 15.0: 0.0
[10/10 11:28:50 detectron2]: AP for 16.0: 0.0
[10/10 11:28:50 detectron2]: AP for 17.0: 0.0
[10/10 11:28:50 detectron2]: AP for 18.0: 0.0
[10/10 11:28:50 detectron2]: AP for 19.0: 0.0
[10/10 11:28:50 detectron2]: mAP: 5.729385975428158e-06
[10/10 11:28:50 detectron2]: AP for 0.0: 1.778590444700967e-06
[10/10 11:28:50 detectron2]: AP for 1.0: 0.0
[10/10 11:28:50 detectron2]: AP for 2.0: 4.170141983195208e-05
[10/10 11:28:50 detectron2]: AP for 3.0: 0.0
[10/10 11:28:50 detectron2]: AP for 4.0: 0.0
[10/10 11:28:50 detectron2]: AP for 5.0: 0.0
[10/10 11:28:50 detectron2]: AP for 6.0: 0.0
[10/10 11:28:50 detectron2]: AP for 7.0: 0.0
[10/10 11:28:50 detectron2]: AP for 8.0: 0.0
[10/10 11:28:50 detectron2]: AP for 9.0: 5.68300238228403e-05
[10/10 11:28:51 detectron2]: AP for 10.0: 0.0
[10/10 11:28:51 detectron2]: AP for 11.0: 0.0
[10/10 11:28:51 detectron2]: AP for 12.0: 0.0
[10/10 11:28:51 detectron2]: AP for 13.0: 3.277775022070273e-06
[10/10 11:28:51 detectron2]: AP for 15.0: 0.0
[10/10 11:28:51 detectron2]: AP for 16.0: 0.0
[10/10 11:28:51 detectron2]: AP for 17.0: 0.0
[10/10 11:28:51 detectron2]: AP for 18.0: 0.0
[10/10 11:28:51 detectron2]: AP for 19.0: 0.0
[10/10 11:28:51 detectron2]: mAP: 5.451990091387415e-06

Could you explain a little how to interpret these numbers? Most of them are nearly 0. New files are created under PR/, and all of them show 0.0% across the precision-vs-recall graph.

akshay-raj-dhamija commented 3 years ago

Hi, it looks like you aren't using a trained model; please refer to the updated README. The logging messages have also been updated to better explain the values. Thanks!
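
For example, if your trained weights are at ./output/model_final.pth, you can point the evaluation at them explicitly. This assumes the standard detectron2 pattern of merging trailing command-line opts into the config, which the Namespace printout further down in this thread suggests main.py follows:

python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --eval-only MODEL.WEIGHTS ./output/model_final.pth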

deepaksinghcv commented 3 years ago

Hello Akshay, I trained the model as described in the updated README.

  1. Multi-GPU evaluation still does not work.
  2. I ran the evaluation on 1 GPU and noticed that the first test evaluation runs on custom_voc_2007_test, followed by WR1_Mixed_Unknowns. In the logs, I see multiple further sequences of evaluation after WR1_Mixed_Unknowns. Could you explain how to interpret the values from this second round of evaluations?

Log:

<starting log cropped>
[11/05 16:03:58 fvcore.common.checkpoint]: Loading checkpoint from ./output/model_final.pth
WARNING [11/05 16:03:58 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[11/05 16:03:58 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[11/05 16:03:59 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 0            |  aeroplane  | 311          |   bicycle   | 389          |
|    bird    | 576          |    boat     | 393          |   bottle    | 657          |
|    bus     | 254          |     car     | 1541         |     cat     | 370          |
|   chair    | 1374         |     cow     | 329          | diningtable | 299          |
|    dog     | 530          |    horse    | 395          |  motorbike  | 369          |
|   person   | 5227         | pottedplant | 592          |    sheep    | 311          |
|    sofa    | 396          |    train    | 302          |  tvmonitor  | 361          |
|            |              |             |              |             |              |
|   total    | 14976        |             |              |             |              |
[11/05 16:03:59 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/05 16:03:59 d2.data.common]: Serialized dataset takes 1.87 MiB
[11/05 16:03:59 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/05 16:03:59 d2.evaluation.evaluator]: Start inference on 4952 images
[11/05 16:04:00 d2.evaluation.evaluator]: Inference done 11/4952. 0.0777 s / img. ETA=0:06:33
[11/05 16:04:05 d2.evaluation.evaluator]: Inference done 75/4952. 0.0772 s / img. ETA=0:06:26
[11/05 16:04:10 d2.evaluation.evaluator]: Inference done 138/4952. 0.0775 s / img. ETA=0:06:23
[11/05 16:04:15 d2.evaluation.evaluator]: Inference done 202/4952. 0.0773 s / img. ETA=0:06:17
[11/05 16:04:21 d2.evaluation.evaluator]: Inference done 265/4952. 0.0774 s / img. ETA=0:06:12
[11/05 16:04:26 d2.evaluation.evaluator]: Inference done 328/4952. 0.0774 s / img. ETA=0:06:07
[11/05 16:04:31 d2.evaluation.evaluator]: Inference done 390/4952. 0.0776 s / img. ETA=0:06:03
[11/05 16:04:36 d2.evaluation.evaluator]: Inference done 453/4952. 0.0776 s / img. ETA=0:05:58
[11/05 16:04:41 d2.evaluation.evaluator]: Inference done 516/4952. 0.0776 s / img. ETA=0:05:53
[11/05 16:04:46 d2.evaluation.evaluator]: Inference done 579/4952. 0.0776 s / img. ETA=0:05:48
[11/05 16:04:51 d2.evaluation.evaluator]: Inference done 642/4952. 0.0776 s / img. ETA=0:05:43
[11/05 16:04:56 d2.evaluation.evaluator]: Inference done 705/4952. 0.0776 s / img. ETA=0:05:38
[11/05 16:05:01 d2.evaluation.evaluator]: Inference done 767/4952. 0.0777 s / img. ETA=0:05:34
[11/05 16:05:06 d2.evaluation.evaluator]: Inference done 830/4952. 0.0777 s / img. ETA=0:05:29
[11/05 16:05:11 d2.evaluation.evaluator]: Inference done 892/4952. 0.0778 s / img. ETA=0:05:24
[11/05 16:05:16 d2.evaluation.evaluator]: Inference done 955/4952. 0.0778 s / img. ETA=0:05:19
[11/05 16:05:21 d2.evaluation.evaluator]: Inference done 1018/4952. 0.0779 s / img. ETA=0:05:14
[11/05 16:05:26 d2.evaluation.evaluator]: Inference done 1080/4952. 0.0779 s / img. ETA=0:05:09
[11/05 16:05:31 d2.evaluation.evaluator]: Inference done 1143/4952. 0.0779 s / img. ETA=0:05:04
[11/05 16:05:36 d2.evaluation.evaluator]: Inference done 1205/4952. 0.0780 s / img. ETA=0:05:00
[11/05 16:05:41 d2.evaluation.evaluator]: Inference done 1268/4952. 0.0780 s / img. ETA=0:04:55
[11/05 16:05:46 d2.evaluation.evaluator]: Inference done 1331/4952. 0.0780 s / img. ETA=0:04:50
[11/05 16:05:51 d2.evaluation.evaluator]: Inference done 1394/4952. 0.0780 s / img. ETA=0:04:45
[11/05 16:05:56 d2.evaluation.evaluator]: Inference done 1456/4952. 0.0780 s / img. ETA=0:04:40
[11/05 16:06:01 d2.evaluation.evaluator]: Inference done 1519/4952. 0.0780 s / img. ETA=0:04:35
[11/05 16:06:06 d2.evaluation.evaluator]: Inference done 1582/4952. 0.0781 s / img. ETA=0:04:30
[11/05 16:06:11 d2.evaluation.evaluator]: Inference done 1645/4952. 0.0781 s / img. ETA=0:04:25
[11/05 16:06:16 d2.evaluation.evaluator]: Inference done 1708/4952. 0.0781 s / img. ETA=0:04:20
[11/05 16:06:21 d2.evaluation.evaluator]: Inference done 1771/4952. 0.0781 s / img. ETA=0:04:15
[11/05 16:06:26 d2.evaluation.evaluator]: Inference done 1834/4952. 0.0781 s / img. ETA=0:04:09
[11/05 16:06:32 d2.evaluation.evaluator]: Inference done 1897/4952. 0.0781 s / img. ETA=0:04:04
[11/05 16:06:37 d2.evaluation.evaluator]: Inference done 1960/4952. 0.0781 s / img. ETA=0:03:59
[11/05 16:06:42 d2.evaluation.evaluator]: Inference done 2024/4952. 0.0780 s / img. ETA=0:03:54
[11/05 16:06:47 d2.evaluation.evaluator]: Inference done 2086/4952. 0.0781 s / img. ETA=0:03:49
[11/05 16:06:52 d2.evaluation.evaluator]: Inference done 2149/4952. 0.0781 s / img. ETA=0:03:44
[11/05 16:06:57 d2.evaluation.evaluator]: Inference done 2212/4952. 0.0781 s / img. ETA=0:03:39
[11/05 16:07:02 d2.evaluation.evaluator]: Inference done 2275/4952. 0.0781 s / img. ETA=0:03:34
[11/05 16:07:07 d2.evaluation.evaluator]: Inference done 2338/4952. 0.0781 s / img. ETA=0:03:29
[11/05 16:07:12 d2.evaluation.evaluator]: Inference done 2401/4952. 0.0781 s / img. ETA=0:03:24
[11/05 16:07:17 d2.evaluation.evaluator]: Inference done 2464/4952. 0.0781 s / img. ETA=0:03:19
[11/05 16:07:22 d2.evaluation.evaluator]: Inference done 2526/4952. 0.0781 s / img. ETA=0:03:14
[11/05 16:07:27 d2.evaluation.evaluator]: Inference done 2588/4952. 0.0781 s / img. ETA=0:03:09
[11/05 16:07:32 d2.evaluation.evaluator]: Inference done 2651/4952. 0.0781 s / img. ETA=0:03:04
[11/05 16:07:37 d2.evaluation.evaluator]: Inference done 2713/4952. 0.0782 s / img. ETA=0:02:59
[11/05 16:07:42 d2.evaluation.evaluator]: Inference done 2775/4952. 0.0782 s / img. ETA=0:02:54
[11/05 16:07:47 d2.evaluation.evaluator]: Inference done 2838/4952. 0.0782 s / img. ETA=0:02:49
[11/05 16:07:52 d2.evaluation.evaluator]: Inference done 2902/4952. 0.0782 s / img. ETA=0:02:44
[11/05 16:07:57 d2.evaluation.evaluator]: Inference done 2964/4952. 0.0782 s / img. ETA=0:02:39
[11/05 16:08:02 d2.evaluation.evaluator]: Inference done 3025/4952. 0.0782 s / img. ETA=0:02:34
[11/05 16:08:08 d2.evaluation.evaluator]: Inference done 3088/4952. 0.0782 s / img. ETA=0:02:29
[11/05 16:08:13 d2.evaluation.evaluator]: Inference done 3150/4952. 0.0783 s / img. ETA=0:02:24
[11/05 16:08:18 d2.evaluation.evaluator]: Inference done 3212/4952. 0.0783 s / img. ETA=0:02:19
[11/05 16:08:23 d2.evaluation.evaluator]: Inference done 3274/4952. 0.0783 s / img. ETA=0:02:14
[11/05 16:08:28 d2.evaluation.evaluator]: Inference done 3336/4952. 0.0783 s / img. ETA=0:02:09
[11/05 16:08:33 d2.evaluation.evaluator]: Inference done 3398/4952. 0.0783 s / img. ETA=0:02:04
[11/05 16:08:38 d2.evaluation.evaluator]: Inference done 3461/4952. 0.0783 s / img. ETA=0:01:59
[11/05 16:08:43 d2.evaluation.evaluator]: Inference done 3522/4952. 0.0783 s / img. ETA=0:01:55
[11/05 16:08:48 d2.evaluation.evaluator]: Inference done 3584/4952. 0.0784 s / img. ETA=0:01:50
[11/05 16:08:53 d2.evaluation.evaluator]: Inference done 3646/4952. 0.0784 s / img. ETA=0:01:45
[11/05 16:08:58 d2.evaluation.evaluator]: Inference done 3708/4952. 0.0784 s / img. ETA=0:01:40
[11/05 16:09:03 d2.evaluation.evaluator]: Inference done 3770/4952. 0.0784 s / img. ETA=0:01:35
[11/05 16:09:08 d2.evaluation.evaluator]: Inference done 3833/4952. 0.0784 s / img. ETA=0:01:30
[11/05 16:09:13 d2.evaluation.evaluator]: Inference done 3895/4952. 0.0784 s / img. ETA=0:01:25
[11/05 16:09:18 d2.evaluation.evaluator]: Inference done 3958/4952. 0.0784 s / img. ETA=0:01:20
[11/05 16:09:23 d2.evaluation.evaluator]: Inference done 4019/4952. 0.0784 s / img. ETA=0:01:15
[11/05 16:09:28 d2.evaluation.evaluator]: Inference done 4081/4952. 0.0784 s / img. ETA=0:01:10
[11/05 16:09:33 d2.evaluation.evaluator]: Inference done 4143/4952. 0.0784 s / img. ETA=0:01:05
[11/05 16:09:38 d2.evaluation.evaluator]: Inference done 4205/4952. 0.0784 s / img. ETA=0:01:00
[11/05 16:09:43 d2.evaluation.evaluator]: Inference done 4266/4952. 0.0785 s / img. ETA=0:00:55
[11/05 16:09:48 d2.evaluation.evaluator]: Inference done 4328/4952. 0.0785 s / img. ETA=0:00:50
[11/05 16:09:53 d2.evaluation.evaluator]: Inference done 4389/4952. 0.0785 s / img. ETA=0:00:45
[11/05 16:09:58 d2.evaluation.evaluator]: Inference done 4452/4952. 0.0785 s / img. ETA=0:00:40
[11/05 16:10:03 d2.evaluation.evaluator]: Inference done 4514/4952. 0.0785 s / img. ETA=0:00:35
[11/05 16:10:08 d2.evaluation.evaluator]: Inference done 4576/4952. 0.0785 s / img. ETA=0:00:30
[11/05 16:10:13 d2.evaluation.evaluator]: Inference done 4638/4952. 0.0785 s / img. ETA=0:00:25
[11/05 16:10:18 d2.evaluation.evaluator]: Inference done 4699/4952. 0.0786 s / img. ETA=0:00:20
[11/05 16:10:23 d2.evaluation.evaluator]: Inference done 4761/4952. 0.0786 s / img. ETA=0:00:15
[11/05 16:10:29 d2.evaluation.evaluator]: Inference done 4823/4952. 0.0786 s / img. ETA=0:00:10
[11/05 16:10:34 d2.evaluation.evaluator]: Inference done 4886/4952. 0.0786 s / img. ETA=0:00:05
[11/05 16:10:39 d2.evaluation.evaluator]: Inference done 4948/4952. 0.0786 s / img. ETA=0:00:00
[11/05 16:10:39 d2.evaluation.evaluator]: Total inference time: 0:06:39.141057 (0.080683 s / img per device, on 1 devices)
[11/05 16:10:39 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:28 (0.078568 s / img per device, on 1 devices)
[11/05 16:10:39 detectron2]: Image level evaluation complete for custom_voc_2007_test
[11/05 16:10:39 detectron2]: Results for custom_voc_2007_test
[11/05 16:10:39 detectron2]: AP for 0: 0.808440625667572
[11/05 16:10:39 detectron2]: AP for 1: 0.7770674228668213
[11/05 16:10:40 detectron2]: AP for 2: 0.7009527683258057
[11/05 16:10:40 detectron2]: AP for 3: 0.5980304479598999
[11/05 16:10:40 detectron2]: AP for 4: 0.5711202621459961
[11/05 16:10:40 detectron2]: AP for 5: 0.7750421762466431
[11/05 16:10:40 detectron2]: AP for 6: 0.7938251495361328
[11/05 16:10:41 detectron2]: AP for 7: 0.88051837682724
[11/05 16:10:41 detectron2]: AP for 8: 0.5696967840194702
[11/05 16:10:41 detectron2]: AP for 9: 0.7881757616996765
[11/05 16:10:41 detectron2]: AP for 10: 0.6557427048683167
[11/05 16:10:41 detectron2]: AP for 11: 0.8663510680198669
[11/05 16:10:41 detectron2]: AP for 12: 0.8503947854042053
[11/05 16:10:41 detectron2]: AP for 13: 0.8271569609642029
[11/05 16:10:42 detectron2]: AP for 14: 0.788679838180542
[11/05 16:10:42 detectron2]: AP for 15: 0.4945039451122284
[11/05 16:10:42 detectron2]: AP for 16: 0.6859263181686401
[11/05 16:10:42 detectron2]: AP for 17: 0.6699324250221252
[11/05 16:10:42 detectron2]: AP for 18: 0.8395748734474182
[11/05 16:10:43 detectron2]: AP for 19: 0.7555124163627625
[11/05 16:10:43 detectron2]: mAP: 0.7348322868347168
WARNING [11/05 16:10:43 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[11/05 16:10:43 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[11/05 16:10:43 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 15235        |  aeroplane  | 0            |   bicycle   | 0            |
|    bird    | 0            |    boat     | 0            |   bottle    | 0            |
|    bus     | 0            |     car     | 0            |     cat     | 0            |
|   chair    | 0            |     cow     | 0            | diningtable | 0            |
|    dog     | 0            |    horse    | 0            |  motorbike  | 0            |
|   person   | 0            | pottedplant | 0            |    sheep    | 0            |
|    sofa    | 0            |    train    | 0            |  tvmonitor  | 0            |
|            |              |             |              |             |              |
|   total    | 15235        |             |              |             |              |
[11/05 16:10:43 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/05 16:10:43 d2.data.common]: Serialized dataset takes 8.39 MiB
[11/05 16:10:43 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/05 16:10:43 d2.evaluation.evaluator]: Start inference on 4952 images
[11/05 16:10:45 d2.evaluation.evaluator]: Inference done 11/4952. 0.0794 s / img. ETA=0:06:40
[11/05 16:10:50 d2.evaluation.evaluator]: Inference done 74/4952. 0.0786 s / img. ETA=0:06:33
[11/05 16:10:55 d2.evaluation.evaluator]: Inference done 136/4952. 0.0788 s / img. ETA=0:06:28
[11/05 16:11:00 d2.evaluation.evaluator]: Inference done 198/4952. 0.0790 s / img. ETA=0:06:24
[11/05 16:11:05 d2.evaluation.evaluator]: Inference done 260/4952. 0.0791 s / img. ETA=0:06:20
[11/05 16:11:10 d2.evaluation.evaluator]: Inference done 321/4952. 0.0794 s / img. ETA=0:06:16
[11/05 16:11:15 d2.evaluation.evaluator]: Inference done 383/4952. 0.0794 s / img. ETA=0:06:11
[11/05 16:11:20 d2.evaluation.evaluator]: Inference done 445/4952. 0.0793 s / img. ETA=0:06:06
[11/05 16:11:25 d2.evaluation.evaluator]: Inference done 506/4952. 0.0795 s / img. ETA=0:06:01
[11/05 16:11:30 d2.evaluation.evaluator]: Inference done 567/4952. 0.0795 s / img. ETA=0:05:57
[11/05 16:11:35 d2.evaluation.evaluator]: Inference done 629/4952. 0.0796 s / img. ETA=0:05:52
[11/05 16:11:40 d2.evaluation.evaluator]: Inference done 691/4952. 0.0796 s / img. ETA=0:05:47
[11/05 16:11:45 d2.evaluation.evaluator]: Inference done 752/4952. 0.0796 s / img. ETA=0:05:42
[11/05 16:11:50 d2.evaluation.evaluator]: Inference done 815/4952. 0.0795 s / img. ETA=0:05:37
[11/05 16:11:55 d2.evaluation.evaluator]: Inference done 877/4952. 0.0795 s / img. ETA=0:05:32
[11/05 16:12:00 d2.evaluation.evaluator]: Inference done 938/4952. 0.0796 s / img. ETA=0:05:27
[11/05 16:12:05 d2.evaluation.evaluator]: Inference done 999/4952. 0.0796 s / img. ETA=0:05:22
[11/05 16:12:10 d2.evaluation.evaluator]: Inference done 1061/4952. 0.0796 s / img. ETA=0:05:17
[11/05 16:12:15 d2.evaluation.evaluator]: Inference done 1122/4952. 0.0797 s / img. ETA=0:05:12
[11/05 16:12:20 d2.evaluation.evaluator]: Inference done 1184/4952. 0.0796 s / img. ETA=0:05:07
[11/05 16:12:26 d2.evaluation.evaluator]: Inference done 1246/4952. 0.0796 s / img. ETA=0:05:02
[11/05 16:12:31 d2.evaluation.evaluator]: Inference done 1306/4952. 0.0797 s / img. ETA=0:04:57
[11/05 16:12:36 d2.evaluation.evaluator]: Inference done 1366/4952. 0.0798 s / img. ETA=0:04:53
[11/05 16:12:41 d2.evaluation.evaluator]: Inference done 1428/4952. 0.0798 s / img. ETA=0:04:48
[11/05 16:12:46 d2.evaluation.evaluator]: Inference done 1490/4952. 0.0798 s / img. ETA=0:04:43
[11/05 16:12:51 d2.evaluation.evaluator]: Inference done 1552/4952. 0.0798 s / img. ETA=0:04:38
[11/05 16:12:56 d2.evaluation.evaluator]: Inference done 1613/4952. 0.0798 s / img. ETA=0:04:33
[11/05 16:13:01 d2.evaluation.evaluator]: Inference done 1675/4952. 0.0798 s / img. ETA=0:04:28
[11/05 16:13:06 d2.evaluation.evaluator]: Inference done 1737/4952. 0.0798 s / img. ETA=0:04:22
[11/05 16:13:11 d2.evaluation.evaluator]: Inference done 1798/4952. 0.0798 s / img. ETA=0:04:18
[11/05 16:13:16 d2.evaluation.evaluator]: Inference done 1861/4952. 0.0798 s / img. ETA=0:04:12
[11/05 16:13:21 d2.evaluation.evaluator]: Inference done 1923/4952. 0.0798 s / img. ETA=0:04:07
[11/05 16:13:26 d2.evaluation.evaluator]: Inference done 1984/4952. 0.0798 s / img. ETA=0:04:02
[11/05 16:13:31 d2.evaluation.evaluator]: Inference done 2044/4952. 0.0799 s / img. ETA=0:03:58
[11/05 16:13:36 d2.evaluation.evaluator]: Inference done 2105/4952. 0.0799 s / img. ETA=0:03:53
[11/05 16:13:41 d2.evaluation.evaluator]: Inference done 2167/4952. 0.0799 s / img. ETA=0:03:48
[11/05 16:13:46 d2.evaluation.evaluator]: Inference done 2228/4952. 0.0799 s / img. ETA=0:03:43
[11/05 16:13:51 d2.evaluation.evaluator]: Inference done 2289/4952. 0.0799 s / img. ETA=0:03:38
[11/05 16:13:56 d2.evaluation.evaluator]: Inference done 2350/4952. 0.0799 s / img. ETA=0:03:33
[11/05 16:14:01 d2.evaluation.evaluator]: Inference done 2411/4952. 0.0800 s / img. ETA=0:03:28
[11/05 16:14:06 d2.evaluation.evaluator]: Inference done 2472/4952. 0.0800 s / img. ETA=0:03:23
[11/05 16:14:12 d2.evaluation.evaluator]: Inference done 2534/4952. 0.0800 s / img. ETA=0:03:18
[11/05 16:14:17 d2.evaluation.evaluator]: Inference done 2595/4952. 0.0800 s / img. ETA=0:03:13
[11/05 16:14:22 d2.evaluation.evaluator]: Inference done 2657/4952. 0.0800 s / img. ETA=0:03:08
[11/05 16:14:27 d2.evaluation.evaluator]: Inference done 2718/4952. 0.0800 s / img. ETA=0:03:03
[11/05 16:14:32 d2.evaluation.evaluator]: Inference done 2780/4952. 0.0800 s / img. ETA=0:02:58
[11/05 16:14:37 d2.evaluation.evaluator]: Inference done 2842/4952. 0.0800 s / img. ETA=0:02:53
[11/05 16:14:42 d2.evaluation.evaluator]: Inference done 2904/4952. 0.0800 s / img. ETA=0:02:47
[11/05 16:14:47 d2.evaluation.evaluator]: Inference done 2966/4952. 0.0800 s / img. ETA=0:02:42
[11/05 16:14:52 d2.evaluation.evaluator]: Inference done 3028/4952. 0.0800 s / img. ETA=0:02:37
[11/05 16:14:57 d2.evaluation.evaluator]: Inference done 3090/4952. 0.0800 s / img. ETA=0:02:32
[11/05 16:15:02 d2.evaluation.evaluator]: Inference done 3152/4952. 0.0800 s / img. ETA=0:02:27
[11/05 16:15:07 d2.evaluation.evaluator]: Inference done 3214/4952. 0.0799 s / img. ETA=0:02:22
[11/05 16:15:12 d2.evaluation.evaluator]: Inference done 3275/4952. 0.0800 s / img. ETA=0:02:17
[11/05 16:15:17 d2.evaluation.evaluator]: Inference done 3336/4952. 0.0800 s / img. ETA=0:02:12
[11/05 16:15:22 d2.evaluation.evaluator]: Inference done 3397/4952. 0.0800 s / img. ETA=0:02:07
[11/05 16:15:27 d2.evaluation.evaluator]: Inference done 3460/4952. 0.0799 s / img. ETA=0:02:02
[11/05 16:15:32 d2.evaluation.evaluator]: Inference done 3521/4952. 0.0799 s / img. ETA=0:01:57
[11/05 16:15:37 d2.evaluation.evaluator]: Inference done 3582/4952. 0.0800 s / img. ETA=0:01:52
[11/05 16:15:42 d2.evaluation.evaluator]: Inference done 3643/4952. 0.0800 s / img. ETA=0:01:47
[11/05 16:15:47 d2.evaluation.evaluator]: Inference done 3705/4952. 0.0800 s / img. ETA=0:01:42
[11/05 16:15:52 d2.evaluation.evaluator]: Inference done 3767/4952. 0.0800 s / img. ETA=0:01:37
[11/05 16:15:58 d2.evaluation.evaluator]: Inference done 3828/4952. 0.0800 s / img. ETA=0:01:32
[11/05 16:16:03 d2.evaluation.evaluator]: Inference done 3889/4952. 0.0800 s / img. ETA=0:01:27
[11/05 16:16:08 d2.evaluation.evaluator]: Inference done 3950/4952. 0.0800 s / img. ETA=0:01:22
[11/05 16:16:13 d2.evaluation.evaluator]: Inference done 4012/4952. 0.0800 s / img. ETA=0:01:17
[11/05 16:16:18 d2.evaluation.evaluator]: Inference done 4073/4952. 0.0800 s / img. ETA=0:01:12
[11/05 16:16:23 d2.evaluation.evaluator]: Inference done 4134/4952. 0.0800 s / img. ETA=0:01:07
[11/05 16:16:28 d2.evaluation.evaluator]: Inference done 4196/4952. 0.0800 s / img. ETA=0:01:01
[11/05 16:16:33 d2.evaluation.evaluator]: Inference done 4256/4952. 0.0800 s / img. ETA=0:00:57
[11/05 16:16:38 d2.evaluation.evaluator]: Inference done 4318/4952. 0.0800 s / img. ETA=0:00:51
[11/05 16:16:43 d2.evaluation.evaluator]: Inference done 4380/4952. 0.0800 s / img. ETA=0:00:46
[11/05 16:16:48 d2.evaluation.evaluator]: Inference done 4442/4952. 0.0800 s / img. ETA=0:00:41
[11/05 16:16:53 d2.evaluation.evaluator]: Inference done 4503/4952. 0.0800 s / img. ETA=0:00:36
[11/05 16:16:58 d2.evaluation.evaluator]: Inference done 4564/4952. 0.0800 s / img. ETA=0:00:31
[11/05 16:17:03 d2.evaluation.evaluator]: Inference done 4625/4952. 0.0800 s / img. ETA=0:00:26
[11/05 16:17:08 d2.evaluation.evaluator]: Inference done 4687/4952. 0.0800 s / img. ETA=0:00:21
[11/05 16:17:13 d2.evaluation.evaluator]: Inference done 4749/4952. 0.0800 s / img. ETA=0:00:16
[11/05 16:17:18 d2.evaluation.evaluator]: Inference done 4810/4952. 0.0800 s / img. ETA=0:00:11
[11/05 16:17:23 d2.evaluation.evaluator]: Inference done 4871/4952. 0.0800 s / img. ETA=0:00:06
[11/05 16:17:28 d2.evaluation.evaluator]: Inference done 4933/4952. 0.0800 s / img. ETA=0:00:01
[11/05 16:17:30 d2.evaluation.evaluator]: Total inference time: 0:06:45.566370 (0.081982 s / img per device, on 1 devices)
[11/05 16:17:30 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:35 (0.079976 s / img per device, on 1 devices)
[11/05 16:17:30 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[11/05 16:17:30 detectron2]: Results for WR1_Mixed_Unknowns
[11/05 16:17:30 detectron2]: AP for 0: 0.0
[11/05 16:17:30 detectron2]: AP for 1: 0.0
[11/05 16:17:30 detectron2]: AP for 2: 0.0
[11/05 16:17:31 detectron2]: AP for 3: 0.0
[11/05 16:17:31 detectron2]: AP for 4: 0.0
[11/05 16:17:31 detectron2]: AP for 5: 0.0
[11/05 16:17:31 detectron2]: AP for 6: 0.0
[11/05 16:17:31 detectron2]: AP for 7: 0.0
[11/05 16:17:31 detectron2]: AP for 8: 0.0
[11/05 16:17:32 detectron2]: AP for 9: 0.0
[11/05 16:17:32 detectron2]: AP for 10: 0.0
[11/05 16:17:32 detectron2]: AP for 11: 0.0
[11/05 16:17:32 detectron2]: AP for 12: 0.0
[11/05 16:17:32 detectron2]: AP for 13: 0.0
[11/05 16:17:32 detectron2]: AP for 14: 0.0
[11/05 16:17:32 detectron2]: AP for 15: 0.0
[11/05 16:17:33 detectron2]: AP for 16: 0.0
[11/05 16:17:33 detectron2]: AP for 17: 0.0
[11/05 16:17:33 detectron2]: AP for 18: 0.0
[11/05 16:17:33 detectron2]: AP for 19: 0.0
[11/05 16:17:33 detectron2]: mAP: 0.0
[11/05 16:17:33 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[11/05 16:17:33 detectron2]: AP for 0: 0.7963087558746338
[11/05 16:17:33 detectron2]: AP for 1: 0.7675489783287048
[11/05 16:17:33 detectron2]: AP for 2: 0.6266054511070251
[11/05 16:17:33 detectron2]: AP for 3: 0.5853570699691772
[11/05 16:17:33 detectron2]: AP for 4: 0.5181585550308228
[11/05 16:17:33 detectron2]: AP for 5: 0.735167920589447
[11/05 16:17:33 detectron2]: AP for 6: 0.7736961245536804
[11/05 16:17:33 detectron2]: AP for 7: 0.8574226498603821
[11/05 16:17:33 detectron2]: AP for 8: 0.5211853384971619
[11/05 16:17:33 detectron2]: AP for 9: 0.6248680353164673
[11/05 16:17:33 detectron2]: AP for 10: 0.4676196277141571
[11/05 16:17:33 detectron2]: AP for 11: 0.7827640175819397
[11/05 16:17:33 detectron2]: AP for 12: 0.7323558330535889
[11/05 16:17:33 detectron2]: AP for 13: 0.8122451901435852
[11/05 16:17:33 detectron2]: AP for 14: 0.7807108163833618
[11/05 16:17:33 detectron2]: AP for 15: 0.42341139912605286
[11/05 16:17:33 detectron2]: AP for 16: 0.5794138312339783
[11/05 16:17:33 detectron2]: AP for 17: 0.5766554474830627
[11/05 16:17:33 detectron2]: AP for 18: 0.8170406818389893
[11/05 16:17:33 detectron2]: AP for 19: 0.6788268685340881
[11/05 16:17:33 detectron2]: mAP: 0.6728681325912476
[11/05 16:17:35 detectron2]: AP for 0.0: 0.808440625667572
[11/05 16:17:35 detectron2]: AP for 1.0: 0.7770674228668213
[11/05 16:17:35 detectron2]: AP for 2.0: 0.7009527683258057
[11/05 16:17:35 detectron2]: AP for 3.0: 0.5980304479598999
[11/05 16:17:35 detectron2]: AP for 4.0: 0.5711202621459961
[11/05 16:17:35 detectron2]: AP for 5.0: 0.7750421762466431
[11/05 16:17:35 detectron2]: AP for 6.0: 0.7938251495361328
[11/05 16:17:35 detectron2]: AP for 7.0: 0.88051837682724
[11/05 16:17:35 detectron2]: AP for 8.0: 0.5696967840194702
[11/05 16:17:35 detectron2]: AP for 9.0: 0.7881757616996765
[11/05 16:17:35 detectron2]: AP for 10.0: 0.6557427048683167
[11/05 16:17:35 detectron2]: AP for 11.0: 0.8663510680198669
[11/05 16:17:35 detectron2]: AP for 12.0: 0.8503947854042053
[11/05 16:17:35 detectron2]: AP for 13.0: 0.8271569609642029
[11/05 16:17:35 detectron2]: AP for 14.0: 0.788679838180542
[11/05 16:17:35 detectron2]: AP for 15.0: 0.4945039451122284
[11/05 16:17:35 detectron2]: AP for 16.0: 0.6859263181686401
[11/05 16:17:35 detectron2]: AP for 17.0: 0.6699324250221252
[11/05 16:17:35 detectron2]: AP for 18.0: 0.8395748734474182
[11/05 16:17:35 detectron2]: AP for 19.0: 0.7555124163627625
[11/05 16:17:35 detectron2]: mAP: 0.7348322868347168
[11/05 16:17:35 detectron2]: AP for 0.0: 0.8074251413345337
[11/05 16:17:35 detectron2]: AP for 1.0: 0.7757116556167603
[11/05 16:17:35 detectron2]: AP for 2.0: 0.6927441358566284
[11/05 16:17:35 detectron2]: AP for 3.0: 0.5954269170761108
[11/05 16:17:35 detectron2]: AP for 4.0: 0.564703106880188
[11/05 16:17:35 detectron2]: AP for 5.0: 0.7729313373565674
[11/05 16:17:35 detectron2]: AP for 6.0: 0.793378472328186
[11/05 16:17:35 detectron2]: AP for 7.0: 0.8772933483123779
[11/05 16:17:35 detectron2]: AP for 8.0: 0.5648582577705383
[11/05 16:17:35 detectron2]: AP for 9.0: 0.7626721858978271
[11/05 16:17:35 detectron2]: AP for 10.0: 0.6286855936050415
[11/05 16:17:35 detectron2]: AP for 11.0: 0.8550533056259155
[11/05 16:17:35 detectron2]: AP for 12.0: 0.8281086087226868
[11/05 16:17:35 detectron2]: AP for 13.0: 0.8261915445327759
[11/05 16:17:35 detectron2]: AP for 14.0: 0.7882627844810486
[11/05 16:17:35 detectron2]: AP for 15.0: 0.4836746156215668
[11/05 16:17:35 detectron2]: AP for 16.0: 0.6525588631629944
[11/05 16:17:35 detectron2]: AP for 17.0: 0.6623187065124512
[11/05 16:17:35 detectron2]: AP for 18.0: 0.8355180621147156
[11/05 16:17:35 detectron2]: AP for 19.0: 0.7453069686889648
[11/05 16:17:35 detectron2]: mAP: 0.7256411910057068
[11/05 16:17:35 detectron2]: AP for 0.0: 0.8071157336235046
[11/05 16:17:35 detectron2]: AP for 1.0: 0.7746164202690125
[11/05 16:17:35 detectron2]: AP for 2.0: 0.682853102684021
[11/05 16:17:35 detectron2]: AP for 3.0: 0.5943691730499268
[11/05 16:17:35 detectron2]: AP for 4.0: 0.5593255162239075
[11/05 16:17:35 detectron2]: AP for 5.0: 0.7691256999969482
[11/05 16:17:35 detectron2]: AP for 6.0: 0.7894589900970459
[11/05 16:17:35 detectron2]: AP for 7.0: 0.8758093118667603
[11/05 16:17:35 detectron2]: AP for 8.0: 0.5579070448875427
[11/05 16:17:35 detectron2]: AP for 9.0: 0.7415065765380859
[11/05 16:17:35 detectron2]: AP for 10.0: 0.6064231395721436
[11/05 16:17:35 detectron2]: AP for 11.0: 0.8470325469970703
[11/05 16:17:35 detectron2]: AP for 12.0: 0.8122345209121704
[11/05 16:17:35 detectron2]: AP for 13.0: 0.8237715363502502
[11/05 16:17:35 detectron2]: AP for 14.0: 0.7872760891914368
[11/05 16:17:35 detectron2]: AP for 15.0: 0.4740290641784668
[11/05 16:17:35 detectron2]: AP for 16.0: 0.6393458843231201
[11/05 16:17:35 detectron2]: AP for 17.0: 0.6541965007781982
[11/05 16:17:35 detectron2]: AP for 18.0: 0.8327656388282776
[11/05 16:17:35 detectron2]: AP for 19.0: 0.7375192046165466
[11/05 16:17:35 detectron2]: mAP: 0.7183341383934021
[11/05 16:17:35 detectron2]: AP for 0.0: 0.8049968481063843
[11/05 16:17:35 detectron2]: AP for 1.0: 0.7736841440200806
[11/05 16:17:35 detectron2]: AP for 2.0: 0.6740680932998657
[11/05 16:17:35 detectron2]: AP for 3.0: 0.5930513739585876
[11/05 16:17:35 detectron2]: AP for 4.0: 0.5483116507530212
[11/05 16:17:35 detectron2]: AP for 5.0: 0.7650444507598877
[11/05 16:17:35 detectron2]: AP for 6.0: 0.7877101898193359
[11/05 16:17:35 detectron2]: AP for 7.0: 0.8685283660888672
[11/05 16:17:35 detectron2]: AP for 8.0: 0.552049458026886
[11/05 16:17:35 detectron2]: AP for 9.0: 0.7125534415245056
[11/05 16:17:35 detectron2]: AP for 10.0: 0.5847764611244202
[11/05 16:17:35 detectron2]: AP for 11.0: 0.8309327960014343
[11/05 16:17:35 detectron2]: AP for 12.0: 0.7989715337753296
[11/05 16:17:35 detectron2]: AP for 13.0: 0.822482705116272
[11/05 16:17:35 detectron2]: AP for 14.0: 0.7867444753646851
[11/05 16:17:35 detectron2]: AP for 15.0: 0.467607706785202
[11/05 16:17:35 detectron2]: AP for 16.0: 0.624614953994751
[11/05 16:17:35 detectron2]: AP for 17.0: 0.64666748046875
[11/05 16:17:35 detectron2]: AP for 18.0: 0.83025723695755
[11/05 16:17:35 detectron2]: AP for 19.0: 0.7287084460258484
[11/05 16:17:35 detectron2]: mAP: 0.7100880742073059
[11/05 16:17:35 detectron2]: AP for 0.0: 0.8049968481063843
[11/05 16:17:35 detectron2]: AP for 1.0: 0.77252596616745
[11/05 16:17:35 detectron2]: AP for 2.0: 0.6679394245147705
[11/05 16:17:35 detectron2]: AP for 3.0: 0.5917534828186035
[11/05 16:17:35 detectron2]: AP for 4.0: 0.5431528687477112
[11/05 16:17:35 detectron2]: AP for 5.0: 0.763018786907196
[11/05 16:17:35 detectron2]: AP for 6.0: 0.7866762280464172
[11/05 16:17:35 detectron2]: AP for 7.0: 0.8674986362457275
[11/05 16:17:35 detectron2]: AP for 8.0: 0.5460099577903748
[11/05 16:17:35 detectron2]: AP for 9.0: 0.6895670294761658
[11/05 16:17:35 detectron2]: AP for 10.0: 0.5694316029548645
[11/05 16:17:35 detectron2]: AP for 11.0: 0.8236904144287109
[11/05 16:17:35 detectron2]: AP for 12.0: 0.7914615273475647
[11/05 16:17:35 detectron2]: AP for 13.0: 0.8206184506416321
[11/05 16:17:36 detectron2]: AP for 14.0: 0.7857439517974854
[11/05 16:17:36 detectron2]: AP for 15.0: 0.4596113860607147
[11/05 16:17:36 detectron2]: AP for 16.0: 0.6189888119697571
[11/05 16:17:36 detectron2]: AP for 17.0: 0.6328640580177307
[11/05 16:17:36 detectron2]: AP for 18.0: 0.827954888343811
[11/05 16:17:36 detectron2]: AP for 19.0: 0.7236618995666504
[11/05 16:17:36 detectron2]: mAP: 0.7043583989143372
[11/05 16:17:36 detectron2]: AP for 0.0: 0.8033401370048523
[11/05 16:17:36 detectron2]: AP for 1.0: 0.7720251679420471
[11/05 16:17:36 detectron2]: AP for 2.0: 0.662475049495697
[11/05 16:17:36 detectron2]: AP for 3.0: 0.5906260013580322
[11/05 16:17:36 detectron2]: AP for 4.0: 0.5390602350234985
[11/05 16:17:36 detectron2]: AP for 5.0: 0.7610435485839844
[11/05 16:17:36 detectron2]: AP for 6.0: 0.7842805981636047
[11/05 16:17:36 detectron2]: AP for 7.0: 0.8644418716430664
[11/05 16:17:36 detectron2]: AP for 8.0: 0.5414519309997559
[11/05 16:17:36 detectron2]: AP for 9.0: 0.683279275894165
[11/05 16:17:36 detectron2]: AP for 10.0: 0.5558233261108398
[11/05 16:17:36 detectron2]: AP for 11.0: 0.8159647583961487
[11/05 16:17:36 detectron2]: AP for 12.0: 0.7788593769073486
[11/05 16:17:36 detectron2]: AP for 13.0: 0.819983720779419
[11/05 16:17:36 detectron2]: AP for 14.0: 0.7851929068565369
[11/05 16:17:36 detectron2]: AP for 15.0: 0.4529585838317871
[11/05 16:17:36 detectron2]: AP for 16.0: 0.613614022731781
[11/05 16:17:36 detectron2]: AP for 17.0: 0.6216874122619629
[11/05 16:17:36 detectron2]: AP for 18.0: 0.8259549736976624
[11/05 16:17:36 detectron2]: AP for 19.0: 0.7115657329559326
[11/05 16:17:36 detectron2]: mAP: 0.6991814374923706
[11/05 16:17:36 detectron2]: AP for 0.0: 0.7981526851654053
[11/05 16:17:36 detectron2]: AP for 1.0: 0.7715327739715576
[11/05 16:17:36 detectron2]: AP for 2.0: 0.6526334285736084
[11/05 16:17:36 detectron2]: AP for 3.0: 0.5900876522064209
[11/05 16:17:36 detectron2]: AP for 4.0: 0.5349059700965881
[11/05 16:17:36 detectron2]: AP for 5.0: 0.756045937538147
[11/05 16:17:36 detectron2]: AP for 6.0: 0.782234251499176
[11/05 16:17:36 detectron2]: AP for 7.0: 0.8632158637046814
[11/05 16:17:36 detectron2]: AP for 8.0: 0.5364787578582764
[11/05 16:17:36 detectron2]: AP for 9.0: 0.6754810810089111
[11/05 16:17:36 detectron2]: AP for 10.0: 0.536702573299408
[11/05 16:17:36 detectron2]: AP for 11.0: 0.8106412887573242
[11/05 16:17:36 detectron2]: AP for 12.0: 0.7694507837295532
[11/05 16:17:36 detectron2]: AP for 13.0: 0.8182892203330994
[11/05 16:17:36 detectron2]: AP for 14.0: 0.7837110757827759
[11/05 16:17:36 detectron2]: AP for 15.0: 0.4455527365207672
[11/05 16:17:36 detectron2]: AP for 16.0: 0.610645592212677
[11/05 16:17:36 detectron2]: AP for 17.0: 0.6072170734405518
[11/05 16:17:36 detectron2]: AP for 18.0: 0.8240447640419006
[11/05 16:17:36 detectron2]: AP for 19.0: 0.7019145488739014
[11/05 16:17:36 detectron2]: mAP: 0.6934468746185303
[11/05 16:17:36 detectron2]: AP for 0.0: 0.7981526851654053
[11/05 16:17:36 detectron2]: AP for 1.0: 0.7703367471694946
[11/05 16:17:36 detectron2]: AP for 2.0: 0.6466332077980042
[11/05 16:17:36 detectron2]: AP for 3.0: 0.5887355804443359
[11/05 16:17:36 detectron2]: AP for 4.0: 0.5296403169631958
[11/05 16:17:36 detectron2]: AP for 5.0: 0.752450704574585
[11/05 16:17:36 detectron2]: AP for 6.0: 0.7786900997161865
[11/05 16:17:36 detectron2]: AP for 7.0: 0.8613957762718201
[11/05 16:17:36 detectron2]: AP for 8.0: 0.5334128737449646
[11/05 16:17:36 detectron2]: AP for 9.0: 0.667311429977417
[11/05 16:17:36 detectron2]: AP for 10.0: 0.522779107093811
[11/05 16:17:36 detectron2]: AP for 11.0: 0.8044999837875366
[11/05 16:17:36 detectron2]: AP for 12.0: 0.7649497389793396
[11/05 16:17:36 detectron2]: AP for 13.0: 0.8165357112884521
[11/05 16:17:36 detectron2]: AP for 14.0: 0.7826014161109924
[11/05 16:17:36 detectron2]: AP for 15.0: 0.43944063782691956
[11/05 16:17:36 detectron2]: AP for 16.0: 0.6019951701164246
[11/05 16:17:36 detectron2]: AP for 17.0: 0.6002336740493774
[11/05 16:17:36 detectron2]: AP for 18.0: 0.8227123022079468
[11/05 16:17:36 detectron2]: AP for 19.0: 0.6941025257110596
[11/05 16:17:36 detectron2]: mAP: 0.6888304948806763
[11/05 16:17:36 detectron2]: AP for 0.0: 0.7972252368927002
[11/05 16:17:36 detectron2]: AP for 1.0: 0.7689640522003174
[11/05 16:17:36 detectron2]: AP for 2.0: 0.6400614380836487
[11/05 16:17:36 detectron2]: AP for 3.0: 0.5880363583564758
[11/05 16:17:36 detectron2]: AP for 4.0: 0.5246222019195557
[11/05 16:17:36 detectron2]: AP for 5.0: 0.7388509511947632
[11/05 16:17:36 detectron2]: AP for 6.0: 0.7777529358863831
[11/05 16:17:36 detectron2]: AP for 7.0: 0.859672486782074
[11/05 16:17:36 detectron2]: AP for 8.0: 0.5305948257446289
[11/05 16:17:36 detectron2]: AP for 9.0: 0.6403424739837646
[11/05 16:17:36 detectron2]: AP for 10.0: 0.49243098497390747
[11/05 16:17:36 detectron2]: AP for 11.0: 0.793095052242279
[11/05 16:17:36 detectron2]: AP for 12.0: 0.7520943284034729
[11/05 16:17:36 detectron2]: AP for 13.0: 0.814483642578125
[11/05 16:17:36 detectron2]: AP for 14.0: 0.7821661233901978
[11/05 16:17:36 detectron2]: AP for 15.0: 0.43383291363716125
[11/05 16:17:36 detectron2]: AP for 16.0: 0.5963069200515747
[11/05 16:17:36 detectron2]: AP for 17.0: 0.5941689014434814
[11/05 16:17:36 detectron2]: AP for 18.0: 0.8211178183555603
[11/05 16:17:36 detectron2]: AP for 19.0: 0.6900832653045654
[11/05 16:17:36 detectron2]: mAP: 0.6817951798439026
[11/05 16:17:37 detectron2]: AP for 0.0: 0.7972252368927002
[11/05 16:17:37 detectron2]: AP for 1.0: 0.7681931853294373
[11/05 16:17:37 detectron2]: AP for 2.0: 0.631178617477417
[11/05 16:17:37 detectron2]: AP for 3.0: 0.5863597393035889
[11/05 16:17:37 detectron2]: AP for 4.0: 0.5224187970161438
[11/05 16:17:37 detectron2]: AP for 5.0: 0.7380679845809937
[11/05 16:17:37 detectron2]: AP for 6.0: 0.7751660943031311
[11/05 16:17:37 detectron2]: AP for 7.0: 0.8585228323936462
[11/05 16:17:37 detectron2]: AP for 8.0: 0.5277168154716492
[11/05 16:17:37 detectron2]: AP for 9.0: 0.6364344954490662
[11/05 16:17:37 detectron2]: AP for 10.0: 0.47778645157814026
[11/05 16:17:37 detectron2]: AP for 11.0: 0.788770318031311
[11/05 16:17:37 detectron2]: AP for 12.0: 0.7422476410865784
[11/05 16:17:37 detectron2]: AP for 13.0: 0.8132762312889099
[11/05 16:17:37 detectron2]: AP for 14.0: 0.7815334796905518
[11/05 16:17:37 detectron2]: AP for 15.0: 0.4281567335128784
[11/05 16:17:37 detectron2]: AP for 16.0: 0.5900235772132874
[11/05 16:17:37 detectron2]: AP for 17.0: 0.5849940776824951
[11/05 16:17:37 detectron2]: AP for 18.0: 0.8193250894546509
[11/05 16:17:37 detectron2]: AP for 19.0: 0.6870383024215698
[11/05 16:17:37 detectron2]: mAP: 0.6777218580245972
[11/05 16:17:37 detectron2]: AP for 0.0: 0.7963087558746338
[11/05 16:17:37 detectron2]: AP for 1.0: 0.7675489783287048
[11/05 16:17:37 detectron2]: AP for 2.0: 0.6266054511070251
[11/05 16:17:37 detectron2]: AP for 3.0: 0.5853570699691772
[11/05 16:17:37 detectron2]: AP for 4.0: 0.5181585550308228
[11/05 16:17:37 detectron2]: AP for 5.0: 0.735167920589447
[11/05 16:17:37 detectron2]: AP for 6.0: 0.7736961245536804
[11/05 16:17:37 detectron2]: AP for 7.0: 0.8574226498603821
[11/05 16:17:37 detectron2]: AP for 8.0: 0.5211853384971619
[11/05 16:17:37 detectron2]: AP for 9.0: 0.6248680353164673
[11/05 16:17:37 detectron2]: AP for 10.0: 0.4676196277141571
[11/05 16:17:37 detectron2]: AP for 11.0: 0.7827640175819397
[11/05 16:17:37 detectron2]: AP for 12.0: 0.7323558330535889
[11/05 16:17:37 detectron2]: AP for 13.0: 0.8122451901435852
[11/05 16:17:37 detectron2]: AP for 14.0: 0.7807108163833618
[11/05 16:17:37 detectron2]: AP for 15.0: 0.42341139912605286
[11/05 16:17:37 detectron2]: AP for 16.0: 0.5794138312339783
[11/05 16:17:37 detectron2]: AP for 17.0: 0.5766554474830627
[11/05 16:17:37 detectron2]: AP for 18.0: 0.8170406818389893
[11/05 16:17:37 detectron2]: AP for 19.0: 0.6788268685340881
[11/05 16:17:37 detectron2]: mAP: 0.6728681325912476
akshay-raj-dhamija commented 3 years ago

Hello Deepak, I am unable to replicate the multi-GPU evaluation issue you are facing. I have verified that multi-GPU evaluation works, both by observing the GPU usage in nvidia-smi and from the logs:

[11/22 01:07:31 d2.evaluation.evaluator]: Total inference time: 0:01:12.698808 (0.118402 s / img per device, on 8 devices)
[11/22 01:07:31 d2.evaluation.evaluator]: Total inference pure compute time: 0:01:10 (0.114994 s / img per device, on 8 devices)

If you are still facing this issue, please make sure you are on the most recent commit, and share the exact command you used along with a screenshot of nvidia-smi.

Regarding the meaning of the log outputs, please have a look at the WIC (wilderness impact curve) in Figure 5 of the paper. Each average precision (AP) value corresponds to a specific class's AP at a specific wilderness level on the x-axis, as in Figure 5. These values are later plugged into the formula from the paper to calculate the wilderness impact. Also, please make sure you are on the latest commit for the improved log output; your previous log output corresponds to a stale commit. I hope this helps.
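
To make that concrete, here is a minimal sketch of the bookkeeping, assuming the paper's definition WI = P_closed / P_open - 1, where P_closed is the precision on the known-class test set alone and P_open the precision after unknowns are mixed in (the function name and precision values below are illustrative, not taken from the repo or the logs):

def wilderness_impact(precision_closed: float, precision_open: float) -> float:
    # Wilderness Impact at one wilderness level: the relative drop in precision
    # when unknown (wilderness) images are mixed into the test set.
    # WI = 0 means the unknowns had no impact; larger WI means more confusion.
    return precision_closed / precision_open - 1.0

# Illustrative precision values at increasing wilderness levels:
p_closed = 0.80
for level, p_open in enumerate([0.80, 0.76, 0.72, 0.69]):
    print(f"wilderness level {level}: WI = {wilderness_impact(p_closed, p_open):.3f}")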

deepaksinghcv commented 3 years ago

Hello Akshay, thank you for the prompt response. I noticed that my previous issue submission was on a stale commit. I updated my cloned repo to the latest commit and tried to reproduce the Faster R-CNN result from Table 1 of your paper, which reports 81.86% mAP on the PASCAL test set and 77.09% for WR1. I used the following command for training:

python main.py --num-gpus 4 --config-file training_configs/faster_rcnn_R_50_FPN.yaml

During testing it throws the error below. Notice that it's on 4 GPUs. Error log:

==========================================
SLURM_JOB_ID = 265254
SLURM_NODELIST = gnode02
SLURM_JOB_GPUS = 0,1,2,3
==========================================
Command Line Args: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=[], resume=False)
[11/26 15:46:20 detectron2]: Rank of current process: 0. World size: 4
[11/26 15:46:22 detectron2]: Environment info:
----------------------  -------------------------------------------------------------------------------
sys.platform            linux
Python                  3.7.3 | packaged by conda-forge | (default, Jul  1 2019, 21:52:21) [GCC 7.3.0]
numpy                   1.16.4
detectron2              0.3 @/home/dksingh/inseg/detectron2/detectron2
Compiler                GCC 5.5
CUDA compiler           CUDA 10.2
detectron2 arch flags   6.1
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 1.6.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch
PyTorch debug build     False
GPU available           True
GPU 0,1,2,3             GeForce GTX 1080 Ti (arch=6.1)
CUDA_HOME               /usr/local/cuda-10.2
Pillow                  7.1.2
torchvision             0.7.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torchvision
torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5
fvcore                  0.1.2.post20201103
cv2                     4.1.0
----------------------  -------------------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

[11/26 15:46:22 detectron2]: Command line arguments: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=[], resume=False)
[11/26 15:46:22 detectron2]: Contents of args.config_file=training_configs/faster_rcnn_R_50_FPN.yaml:
# Configuration for training with 4 gpus
_BASE_: "~/detectron2/detectron2/configs/Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 20
DATASETS:
  TRAIN: ('custom_voc_2007_train','custom_voc_2007_val','custom_voc_2012_train','custom_voc_2012_val',)
  TEST: ('custom_voc_2007_test','WR1_Mixed_Unknowns')
#  TEST: ('custom_voc_2007_test','Mixed_Unknowns')

SOLVER:
  BASE_LR: 0.01
  STEPS: (24000, 32000)
  MAX_ITER: 36000
  WARMUP_ITERS: 100
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
[11/26 15:46:22 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
  ASPECT_RATIO_GROUPING: True
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: ()
  PROPOSAL_FILES_TRAIN: ()
  TEST: ('custom_voc_2007_test', 'WR1_Mixed_Unknowns')
  TRAIN: ('custom_voc_2007_train', 'custom_voc_2007_val', 'custom_voc_2012_train', 'custom_voc_2012_val')
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    ENABLED: False
    SIZE: [0.9, 0.9]
    TYPE: relative_range
  FORMAT: BGR
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1333
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
  MIN_SIZE_TRAIN_SAMPLING: choice
  RANDOM_FLIP: horizontal
MODEL:
  ANCHOR_GENERATOR:
    ANGLES: [[-90, 0, 90]]
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES: [[32], [64], [128], [256], [512]]
  BACKBONE:
    FREEZE_AT: 2
    NAME: build_resnet_fpn_backbone
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
    NORM: 
    OUT_CHANNELS: 256
  KEYPOINT_ON: False
  LOAD_PROPOSALS: False
  MASK_ON: False
  META_ARCHITECTURE: GeneralizedRCNN
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: True
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN: [103.53, 116.28, 123.675]
  PIXEL_STD: [1.0, 1.0, 1.0]
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  RESNETS:
    DEFORM_MODULATED: False
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE: [False, False, False, False]
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: True
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.4, 0.5]
    NMS_THRESH_TEST: 0.5
    NORM: 
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
    IOUS: (0.5, 0.6, 0.7)
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
    CLS_AGNOSTIC_BBOX_REG: False
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: FastRCNNConvFCHead
    NORM: 
    NUM_CONV: 0
    NUM_FC: 2
    POOLER_RESOLUTION: 7
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: False
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    NAME: StandardROIHeads
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 20
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: True
    SCORE_THRESH_TEST: 0.05
  ROI_KEYPOINT_HEAD:
    CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: False
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM: 
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    BOUNDARY_THRESH: -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.3, 0.7]
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 1000
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 2000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
SEED: -1
SOLVER:
  AMP:
    ENABLED: False
  BASE_LR: 0.01
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 5000
  CLIP_GRADIENTS:
    CLIP_TYPE: value
    CLIP_VALUE: 1.0
    ENABLED: False
    NORM_TYPE: 2.0
  GAMMA: 0.1
  IMS_PER_BATCH: 16
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 36000
  MOMENTUM: 0.9
  NESTEROV: False
  REFERENCE_WORLD_SIZE: 0
  STEPS: (24000, 32000)
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 100
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: False
    FLIP: True
    MAX_SIZE: 4000
    MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 0
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: False
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0
[11/26 15:46:22 detectron2]: Full config saved to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/config.yaml
[11/26 15:46:22 d2.utils.env]: Using a generated random seed 22657084
[11/26 15:46:23 detectron2]: Model:
GeneralizedRCNN(
  (backbone): FPN(
    (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (top_block): LastLevelMaxPool()
    (bottom_up): ResNet(
      (stem): BasicStem(
        (conv1): Conv2d(
          3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
      )
      (res2): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv1): Conv2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
      )
      (res3): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv1): Conv2d(
            256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
      )
      (res4): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
          (conv1): Conv2d(
            512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (4): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (5): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
      )
      (res5): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
          (conv1): Conv2d(
            1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten()
      (fc1): Linear(in_features=12544, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=21, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=80, bias=True)
    )
  )
)
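
Given the device mismatch this issue reports, one quick sanity check on the model printed above is to collect the devices of every parameter and buffer. A minimal sketch, assuming `model` is the GeneralizedRCNN instance shown in the dump:

```python
# Sanity check: collect the device of every parameter and buffer.
# `model` is assumed to be the GeneralizedRCNN instance printed above.
devices = {p.device for p in model.parameters()} | {b.device for b in model.buffers()}
print(devices)  # expect a single entry, e.g. {device(type='cuda', index=0)}
```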
[11/26 15:46:23 fvcore.common.checkpoint]: Loading checkpoint from detectron2://ImageNetPretrained/MSRA/R-50.pkl
[11/26 15:46:24 fvcore.common.file_io]: URL https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl cached in /home/dksingh/.torch/fvcore_cache/detectron2/ImageNetPretrained/MSRA/R-50.pkl
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: Remapping C2 weights ......
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv1.norm.bias            loaded from res2_0_branch2a_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv1.norm.running_mean    loaded from res2_0_branch2a_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv1.norm.running_var     loaded from res2_0_branch2a_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv1.norm.weight          loaded from res2_0_branch2a_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv1.weight               loaded from res2_0_branch2a_w                 of shape (64, 64, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv2.norm.bias            loaded from res2_0_branch2b_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv2.norm.running_mean    loaded from res2_0_branch2b_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv2.norm.running_var     loaded from res2_0_branch2b_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv2.norm.weight          loaded from res2_0_branch2b_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv2.weight               loaded from res2_0_branch2b_w                 of shape (64, 64, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv3.norm.bias            loaded from res2_0_branch2c_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv3.norm.running_mean    loaded from res2_0_branch2c_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv3.norm.running_var     loaded from res2_0_branch2c_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv3.norm.weight          loaded from res2_0_branch2c_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.conv3.weight               loaded from res2_0_branch2c_w                 of shape (256, 64, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.shortcut.norm.bias         loaded from res2_0_branch1_bn_beta            of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.shortcut.norm.running_mean loaded from res2_0_branch1_bn_running_mean    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.shortcut.norm.running_var  loaded from res2_0_branch1_bn_running_var     of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.shortcut.norm.weight       loaded from res2_0_branch1_bn_gamma           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.0.shortcut.weight            loaded from res2_0_branch1_w                  of shape (256, 64, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv1.norm.bias            loaded from res2_1_branch2a_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv1.norm.running_mean    loaded from res2_1_branch2a_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv1.norm.running_var     loaded from res2_1_branch2a_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv1.norm.weight          loaded from res2_1_branch2a_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv1.weight               loaded from res2_1_branch2a_w                 of shape (64, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv2.norm.bias            loaded from res2_1_branch2b_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv2.norm.running_mean    loaded from res2_1_branch2b_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv2.norm.running_var     loaded from res2_1_branch2b_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv2.norm.weight          loaded from res2_1_branch2b_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv2.weight               loaded from res2_1_branch2b_w                 of shape (64, 64, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv3.norm.bias            loaded from res2_1_branch2c_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv3.norm.running_mean    loaded from res2_1_branch2c_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv3.norm.running_var     loaded from res2_1_branch2c_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv3.norm.weight          loaded from res2_1_branch2c_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.1.conv3.weight               loaded from res2_1_branch2c_w                 of shape (256, 64, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv1.norm.bias            loaded from res2_2_branch2a_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv1.norm.running_mean    loaded from res2_2_branch2a_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv1.norm.running_var     loaded from res2_2_branch2a_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv1.norm.weight          loaded from res2_2_branch2a_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv1.weight               loaded from res2_2_branch2a_w                 of shape (64, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv2.norm.bias            loaded from res2_2_branch2b_bn_beta           of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv2.norm.running_mean    loaded from res2_2_branch2b_bn_running_mean   of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv2.norm.running_var     loaded from res2_2_branch2b_bn_running_var    of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv2.norm.weight          loaded from res2_2_branch2b_bn_gamma          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv2.weight               loaded from res2_2_branch2b_w                 of shape (64, 64, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv3.norm.bias            loaded from res2_2_branch2c_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv3.norm.running_mean    loaded from res2_2_branch2c_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv3.norm.running_var     loaded from res2_2_branch2c_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv3.norm.weight          loaded from res2_2_branch2c_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res2.2.conv3.weight               loaded from res2_2_branch2c_w                 of shape (256, 64, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv1.norm.bias            loaded from res3_0_branch2a_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv1.norm.running_mean    loaded from res3_0_branch2a_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv1.norm.running_var     loaded from res3_0_branch2a_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv1.norm.weight          loaded from res3_0_branch2a_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv1.weight               loaded from res3_0_branch2a_w                 of shape (128, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv2.norm.bias            loaded from res3_0_branch2b_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv2.norm.running_mean    loaded from res3_0_branch2b_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv2.norm.running_var     loaded from res3_0_branch2b_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv2.norm.weight          loaded from res3_0_branch2b_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv2.weight               loaded from res3_0_branch2b_w                 of shape (128, 128, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv3.norm.bias            loaded from res3_0_branch2c_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv3.norm.running_mean    loaded from res3_0_branch2c_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv3.norm.running_var     loaded from res3_0_branch2c_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv3.norm.weight          loaded from res3_0_branch2c_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.conv3.weight               loaded from res3_0_branch2c_w                 of shape (512, 128, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.shortcut.norm.bias         loaded from res3_0_branch1_bn_beta            of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.shortcut.norm.running_mean loaded from res3_0_branch1_bn_running_mean    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.shortcut.norm.running_var  loaded from res3_0_branch1_bn_running_var     of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.shortcut.norm.weight       loaded from res3_0_branch1_bn_gamma           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.0.shortcut.weight            loaded from res3_0_branch1_w                  of shape (512, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv1.norm.bias            loaded from res3_1_branch2a_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv1.norm.running_mean    loaded from res3_1_branch2a_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv1.norm.running_var     loaded from res3_1_branch2a_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv1.norm.weight          loaded from res3_1_branch2a_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv1.weight               loaded from res3_1_branch2a_w                 of shape (128, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv2.norm.bias            loaded from res3_1_branch2b_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv2.norm.running_mean    loaded from res3_1_branch2b_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv2.norm.running_var     loaded from res3_1_branch2b_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv2.norm.weight          loaded from res3_1_branch2b_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv2.weight               loaded from res3_1_branch2b_w                 of shape (128, 128, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv3.norm.bias            loaded from res3_1_branch2c_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv3.norm.running_mean    loaded from res3_1_branch2c_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv3.norm.running_var     loaded from res3_1_branch2c_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv3.norm.weight          loaded from res3_1_branch2c_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.1.conv3.weight               loaded from res3_1_branch2c_w                 of shape (512, 128, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv1.norm.bias            loaded from res3_2_branch2a_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv1.norm.running_mean    loaded from res3_2_branch2a_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv1.norm.running_var     loaded from res3_2_branch2a_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv1.norm.weight          loaded from res3_2_branch2a_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv1.weight               loaded from res3_2_branch2a_w                 of shape (128, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv2.norm.bias            loaded from res3_2_branch2b_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv2.norm.running_mean    loaded from res3_2_branch2b_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv2.norm.running_var     loaded from res3_2_branch2b_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv2.norm.weight          loaded from res3_2_branch2b_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv2.weight               loaded from res3_2_branch2b_w                 of shape (128, 128, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv3.norm.bias            loaded from res3_2_branch2c_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv3.norm.running_mean    loaded from res3_2_branch2c_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv3.norm.running_var     loaded from res3_2_branch2c_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv3.norm.weight          loaded from res3_2_branch2c_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.2.conv3.weight               loaded from res3_2_branch2c_w                 of shape (512, 128, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv1.norm.bias            loaded from res3_3_branch2a_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv1.norm.running_mean    loaded from res3_3_branch2a_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv1.norm.running_var     loaded from res3_3_branch2a_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv1.norm.weight          loaded from res3_3_branch2a_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv1.weight               loaded from res3_3_branch2a_w                 of shape (128, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv2.norm.bias            loaded from res3_3_branch2b_bn_beta           of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv2.norm.running_mean    loaded from res3_3_branch2b_bn_running_mean   of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv2.norm.running_var     loaded from res3_3_branch2b_bn_running_var    of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv2.norm.weight          loaded from res3_3_branch2b_bn_gamma          of shape (128,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv2.weight               loaded from res3_3_branch2b_w                 of shape (128, 128, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv3.norm.bias            loaded from res3_3_branch2c_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv3.norm.running_mean    loaded from res3_3_branch2c_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv3.norm.running_var     loaded from res3_3_branch2c_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv3.norm.weight          loaded from res3_3_branch2c_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res3.3.conv3.weight               loaded from res3_3_branch2c_w                 of shape (512, 128, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv1.norm.bias            loaded from res4_0_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv1.norm.running_mean    loaded from res4_0_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv1.norm.running_var     loaded from res4_0_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv1.norm.weight          loaded from res4_0_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv1.weight               loaded from res4_0_branch2a_w                 of shape (256, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv2.norm.bias            loaded from res4_0_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv2.norm.running_mean    loaded from res4_0_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv2.norm.running_var     loaded from res4_0_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv2.norm.weight          loaded from res4_0_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv2.weight               loaded from res4_0_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv3.norm.bias            loaded from res4_0_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv3.norm.running_mean    loaded from res4_0_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv3.norm.running_var     loaded from res4_0_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv3.norm.weight          loaded from res4_0_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.conv3.weight               loaded from res4_0_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.shortcut.norm.bias         loaded from res4_0_branch1_bn_beta            of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.shortcut.norm.running_mean loaded from res4_0_branch1_bn_running_mean    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.shortcut.norm.running_var  loaded from res4_0_branch1_bn_running_var     of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.shortcut.norm.weight       loaded from res4_0_branch1_bn_gamma           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.0.shortcut.weight            loaded from res4_0_branch1_w                  of shape (1024, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv1.norm.bias            loaded from res4_1_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv1.norm.running_mean    loaded from res4_1_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv1.norm.running_var     loaded from res4_1_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv1.norm.weight          loaded from res4_1_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv1.weight               loaded from res4_1_branch2a_w                 of shape (256, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv2.norm.bias            loaded from res4_1_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv2.norm.running_mean    loaded from res4_1_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv2.norm.running_var     loaded from res4_1_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv2.norm.weight          loaded from res4_1_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv2.weight               loaded from res4_1_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv3.norm.bias            loaded from res4_1_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv3.norm.running_mean    loaded from res4_1_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv3.norm.running_var     loaded from res4_1_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv3.norm.weight          loaded from res4_1_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.1.conv3.weight               loaded from res4_1_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv1.norm.bias            loaded from res4_2_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv1.norm.running_mean    loaded from res4_2_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv1.norm.running_var     loaded from res4_2_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv1.norm.weight          loaded from res4_2_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv1.weight               loaded from res4_2_branch2a_w                 of shape (256, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv2.norm.bias            loaded from res4_2_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv2.norm.running_mean    loaded from res4_2_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv2.norm.running_var     loaded from res4_2_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv2.norm.weight          loaded from res4_2_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv2.weight               loaded from res4_2_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv3.norm.bias            loaded from res4_2_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv3.norm.running_mean    loaded from res4_2_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv3.norm.running_var     loaded from res4_2_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv3.norm.weight          loaded from res4_2_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.2.conv3.weight               loaded from res4_2_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv1.norm.bias            loaded from res4_3_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv1.norm.running_mean    loaded from res4_3_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv1.norm.running_var     loaded from res4_3_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv1.norm.weight          loaded from res4_3_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv1.weight               loaded from res4_3_branch2a_w                 of shape (256, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv2.norm.bias            loaded from res4_3_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv2.norm.running_mean    loaded from res4_3_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv2.norm.running_var     loaded from res4_3_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv2.norm.weight          loaded from res4_3_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv2.weight               loaded from res4_3_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv3.norm.bias            loaded from res4_3_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv3.norm.running_mean    loaded from res4_3_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv3.norm.running_var     loaded from res4_3_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv3.norm.weight          loaded from res4_3_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.3.conv3.weight               loaded from res4_3_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv1.norm.bias            loaded from res4_4_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv1.norm.running_mean    loaded from res4_4_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv1.norm.running_var     loaded from res4_4_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv1.norm.weight          loaded from res4_4_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv1.weight               loaded from res4_4_branch2a_w                 of shape (256, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv2.norm.bias            loaded from res4_4_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv2.norm.running_mean    loaded from res4_4_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv2.norm.running_var     loaded from res4_4_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv2.norm.weight          loaded from res4_4_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv2.weight               loaded from res4_4_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv3.norm.bias            loaded from res4_4_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv3.norm.running_mean    loaded from res4_4_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv3.norm.running_var     loaded from res4_4_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv3.norm.weight          loaded from res4_4_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.4.conv3.weight               loaded from res4_4_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv1.norm.bias            loaded from res4_5_branch2a_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv1.norm.running_mean    loaded from res4_5_branch2a_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv1.norm.running_var     loaded from res4_5_branch2a_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv1.norm.weight          loaded from res4_5_branch2a_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv1.weight               loaded from res4_5_branch2a_w                 of shape (256, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv2.norm.bias            loaded from res4_5_branch2b_bn_beta           of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv2.norm.running_mean    loaded from res4_5_branch2b_bn_running_mean   of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv2.norm.running_var     loaded from res4_5_branch2b_bn_running_var    of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv2.norm.weight          loaded from res4_5_branch2b_bn_gamma          of shape (256,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv2.weight               loaded from res4_5_branch2b_w                 of shape (256, 256, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv3.norm.bias            loaded from res4_5_branch2c_bn_beta           of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv3.norm.running_mean    loaded from res4_5_branch2c_bn_running_mean   of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv3.norm.running_var     loaded from res4_5_branch2c_bn_running_var    of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv3.norm.weight          loaded from res4_5_branch2c_bn_gamma          of shape (1024,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res4.5.conv3.weight               loaded from res4_5_branch2c_w                 of shape (1024, 256, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv1.norm.bias            loaded from res5_0_branch2a_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv1.norm.running_mean    loaded from res5_0_branch2a_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv1.norm.running_var     loaded from res5_0_branch2a_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv1.norm.weight          loaded from res5_0_branch2a_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv1.weight               loaded from res5_0_branch2a_w                 of shape (512, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv2.norm.bias            loaded from res5_0_branch2b_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv2.norm.running_mean    loaded from res5_0_branch2b_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv2.norm.running_var     loaded from res5_0_branch2b_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv2.norm.weight          loaded from res5_0_branch2b_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv2.weight               loaded from res5_0_branch2b_w                 of shape (512, 512, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv3.norm.bias            loaded from res5_0_branch2c_bn_beta           of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv3.norm.running_mean    loaded from res5_0_branch2c_bn_running_mean   of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv3.norm.running_var     loaded from res5_0_branch2c_bn_running_var    of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv3.norm.weight          loaded from res5_0_branch2c_bn_gamma          of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.conv3.weight               loaded from res5_0_branch2c_w                 of shape (2048, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.shortcut.norm.bias         loaded from res5_0_branch1_bn_beta            of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.shortcut.norm.running_mean loaded from res5_0_branch1_bn_running_mean    of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.shortcut.norm.running_var  loaded from res5_0_branch1_bn_running_var     of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.shortcut.norm.weight       loaded from res5_0_branch1_bn_gamma           of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.0.shortcut.weight            loaded from res5_0_branch1_w                  of shape (2048, 1024, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv1.norm.bias            loaded from res5_1_branch2a_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv1.norm.running_mean    loaded from res5_1_branch2a_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv1.norm.running_var     loaded from res5_1_branch2a_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv1.norm.weight          loaded from res5_1_branch2a_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv1.weight               loaded from res5_1_branch2a_w                 of shape (512, 2048, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv2.norm.bias            loaded from res5_1_branch2b_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv2.norm.running_mean    loaded from res5_1_branch2b_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv2.norm.running_var     loaded from res5_1_branch2b_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv2.norm.weight          loaded from res5_1_branch2b_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv2.weight               loaded from res5_1_branch2b_w                 of shape (512, 512, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv3.norm.bias            loaded from res5_1_branch2c_bn_beta           of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv3.norm.running_mean    loaded from res5_1_branch2c_bn_running_mean   of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv3.norm.running_var     loaded from res5_1_branch2c_bn_running_var    of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv3.norm.weight          loaded from res5_1_branch2c_bn_gamma          of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.1.conv3.weight               loaded from res5_1_branch2c_w                 of shape (2048, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv1.norm.bias            loaded from res5_2_branch2a_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv1.norm.running_mean    loaded from res5_2_branch2a_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv1.norm.running_var     loaded from res5_2_branch2a_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv1.norm.weight          loaded from res5_2_branch2a_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv1.weight               loaded from res5_2_branch2a_w                 of shape (512, 2048, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv2.norm.bias            loaded from res5_2_branch2b_bn_beta           of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv2.norm.running_mean    loaded from res5_2_branch2b_bn_running_mean   of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv2.norm.running_var     loaded from res5_2_branch2b_bn_running_var    of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv2.norm.weight          loaded from res5_2_branch2b_bn_gamma          of shape (512,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv2.weight               loaded from res5_2_branch2b_w                 of shape (512, 512, 3, 3)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv3.norm.bias            loaded from res5_2_branch2c_bn_beta           of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv3.norm.running_mean    loaded from res5_2_branch2c_bn_running_mean   of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv3.norm.running_var     loaded from res5_2_branch2c_bn_running_var    of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv3.norm.weight          loaded from res5_2_branch2c_bn_gamma          of shape (2048,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.res5.2.conv3.weight               loaded from res5_2_branch2c_w                 of shape (2048, 512, 1, 1)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.stem.conv1.norm.bias              loaded from res_conv1_bn_beta                 of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.stem.conv1.norm.running_mean      loaded from res_conv1_bn_running_mean         of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.stem.conv1.norm.running_var       loaded from res_conv1_bn_running_var          of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.stem.conv1.norm.weight            loaded from res_conv1_bn_gamma                of shape (64,)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: backbone.bottom_up.stem.conv1.weight                 loaded from conv1_w                           of shape (64, 3, 7, 7)
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: Some model parameters or buffers are not found in the checkpoint:
  backbone.fpn_lateral2.{bias, weight}
  backbone.fpn_lateral3.{bias, weight}
  backbone.fpn_lateral4.{bias, weight}
  backbone.fpn_lateral5.{bias, weight}
  backbone.fpn_output2.{bias, weight}
  backbone.fpn_output3.{bias, weight}
  backbone.fpn_output4.{bias, weight}
  backbone.fpn_output5.{bias, weight}
  pixel_mean
  pixel_std
  proposal_generator.anchor_generator.cell_anchors.{0, 1, 2, 3, 4}
  proposal_generator.rpn_head.anchor_deltas.{bias, weight}
  proposal_generator.rpn_head.conv.{bias, weight}
  proposal_generator.rpn_head.objectness_logits.{bias, weight}
  roi_heads.box_head.fc1.{bias, weight}
  roi_heads.box_head.fc2.{bias, weight}
  roi_heads.box_predictor.bbox_pred.{bias, weight}
  roi_heads.box_predictor.cls_score.{bias, weight}
[11/26 15:46:24 d2.checkpoint.c2_model_loading]: The checkpoint state_dict contains keys that are not used by the model:
  fc1000_b
  fc1000_w
  conv1_b
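
The three unused keys are expected: `fc1000_w`/`fc1000_b` are the 1000-way ImageNet classifier head of the pretrained ResNet-50 and `conv1_b` is its stem bias, none of which exist in the detection model; conversely, the FPN, RPN, and ROI-head parameters listed as missing are freshly initialized. For reference, this load step is roughly what the checkpointer does when `--resume` finds no prior checkpoint (a sketch; `model` is an assumption here, the trainer normally drives this itself):

```python
from detectron2.checkpoint import DetectionCheckpointer

# Load the Caffe2-format ImageNet weights; d2 remaps the names as logged above.
# `model` is an assumed GeneralizedRCNN instance.
DetectionCheckpointer(model).load("detectron2://ImageNetPretrained/MSRA/R-50.pkl")
```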
[11/26 15:46:25 d2.data.datasets.coco]: Loaded 2501 images in COCO format from protocol/custom_protocols/custom_voc_2007_train.json
[11/26 15:46:26 d2.data.datasets.coco]: Loaded 2510 images in COCO format from protocol/custom_protocols/custom_voc_2007_val.json
[11/26 15:46:26 d2.data.datasets.coco]: Loaded 5717 images in COCO format from protocol/custom_protocols/custom_voc_2012_train.json
[11/26 15:46:26 d2.data.datasets.coco]: Loaded 5823 images in COCO format from protocol/custom_protocols/custom_voc_2012_val.json
[11/26 15:46:26 d2.data.build]: Removed 0 images with no usable annotations. 16551 images left.
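(Sanity check: the four splits above sum correctly, 2501 + 2510 + 5717 + 5823 = 16551, i.e. VOC2007 train/val plus VOC2012 train/val.)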
[11/26 15:46:26 d2.data.build]: Distribution of instances among all 20 categories:
|  category   | #instances   |  category   | #instances   |  category  | #instances   |
|:-----------:|:-------------|:-----------:|:-------------|:----------:|:-------------|
|  aeroplane  | 1285         |   bicycle   | 1208         |    bird    | 1820         |
|    boat     | 1397         |   bottle    | 2116         |    bus     | 909          |
|     car     | 4008         |     cat     | 1616         |   chair    | 4338         |
|     cow     | 1058         | diningtable | 1057         |    dog     | 2079         |
|    horse    | 1156         |  motorbike  | 1141         |   person   | 15576        |
| pottedplant | 1724         |    sheep    | 1347         |    sofa    | 1211         |
|    train    | 984          |  tvmonitor  | 1193         |            |              |
|    total    | 47223        |             |              |            |              |
[11/26 15:46:26 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[11/26 15:46:26 d2.data.build]: Using training sampler TrainingSampler
[11/26 15:46:26 d2.data.common]: Serializing 16551 elements to byte tensors and concatenating them all ...
[11/26 15:46:27 d2.data.common]: Serialized dataset takes 6.17 MiB
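
The augmentation list the DatasetMapper reports above is the standard multi-scale training setup; reconstructed here for reference with detectron2's transform API:

```python
from detectron2.data import transforms as T

# Multi-scale shortest-edge resize plus random horizontal flip,
# matching the DatasetMapper log line above.
augs = [
    T.ResizeShortestEdge(
        short_edge_length=(640, 672, 704, 736, 768, 800),
        max_size=1333,
        sample_style="choice",
    ),
    T.RandomFlip(),
]
```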
[11/26 15:46:27 detectron2]: Starting training from iteration 0
[11/26 15:46:45 d2.utils.events]:  iter: 20  total_loss: 0.7846  loss_box_reg: 0.02849  loss_cls: 0.2678  loss_rpn_cls: 0.5485  loss_rpn_loc: 0.04709  lr: 0.0019081  max_mem: 5120M
[11/26 22:08:10 d2.utils.events]:  eta: 0:00:11  iter: 35980  total_loss: 0.1699  loss_box_reg: 0.1006  loss_cls: 0.05196  loss_rpn_cls: 0.002539  loss_rpn_loc: 0.01681  lr: 0.0001  max_mem: 5120M
[11/26 22:08:22 fvcore.common.checkpoint]: Saving checkpoint to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
[11/26 22:08:24 fvcore.common.checkpoint]: Saving checkpoint to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at  /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
  num_fg = fg_inds.nonzero().numel()
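
This `nonzero` deprecation warning is harmless, and the message itself names the fix; applied to the flagged line in `fast_rcnn.py` it would read:

```python
# Deprecated overload flagged by the warning:
num_fg = fg_inds.nonzero().numel()
# Explicit overload suggested by the warning (same result):
num_fg = fg_inds.nonzero(as_tuple=False).numel()
```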
WARNING [11/26 22:08:26 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[11/26 22:08:26 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[11/26 22:08:26 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 0            |  aeroplane  | 311          |   bicycle   | 389          |
|    bird    | 576          |    boat     | 393          |   bottle    | 657          |
|    bus     | 254          |     car     | 1541         |     cat     | 370          |
|   chair    | 1374         |     cow     | 329          | diningtable | 299          |
|    dog     | 530          |    horse    | 395          |  motorbike  | 369          |
|   person   | 5227         | pottedplant | 592          |    sheep    | 311          |
|    sofa    | 396          |    train    | 302          |  tvmonitor  | 361          |
|            |              |             |              |             |              |
|   total    | 14976        |             |              |             |              |
[11/26 22:08:26 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/26 22:08:26 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/26 22:08:26 d2.data.common]: Serialized dataset takes 1.87 MiB
[11/26 22:08:26 d2.evaluation.evaluator]: Start inference on 1238 images
/home/dksingh/inseg/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:217: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at  /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
  num_fg = fg_inds.nonzero().numel()
[11/26 22:08:33 d2.evaluation.evaluator]: Inference done 11/1238. 0.0654 s / img. ETA=0:01:22
[11/26 22:08:38 d2.evaluation.evaluator]: Inference done 86/1238. 0.0650 s / img. ETA=0:01:17
[11/26 22:08:43 d2.evaluation.evaluator]: Inference done 162/1238. 0.0644 s / img. ETA=0:01:11
[11/26 22:08:48 d2.evaluation.evaluator]: Inference done 238/1238. 0.0642 s / img. ETA=0:01:06
[11/26 22:08:53 d2.evaluation.evaluator]: Inference done 315/1238. 0.0640 s / img. ETA=0:01:01
[11/26 22:08:58 d2.evaluation.evaluator]: Inference done 390/1238. 0.0641 s / img. ETA=0:00:56
[11/26 22:09:03 d2.evaluation.evaluator]: Inference done 466/1238. 0.0641 s / img. ETA=0:00:51
[11/26 22:09:08 d2.evaluation.evaluator]: Inference done 543/1238. 0.0640 s / img. ETA=0:00:46
[11/26 22:09:13 d2.evaluation.evaluator]: Inference done 619/1238. 0.0640 s / img. ETA=0:00:40
[11/26 22:09:18 d2.evaluation.evaluator]: Inference done 695/1238. 0.0640 s / img. ETA=0:00:35
[11/26 22:09:23 d2.evaluation.evaluator]: Inference done 771/1238. 0.0640 s / img. ETA=0:00:30
[11/26 22:09:28 d2.evaluation.evaluator]: Inference done 847/1238. 0.0640 s / img. ETA=0:00:25
[11/26 22:09:33 d2.evaluation.evaluator]: Inference done 923/1238. 0.0640 s / img. ETA=0:00:20
[11/26 22:09:38 d2.evaluation.evaluator]: Inference done 998/1238. 0.0641 s / img. ETA=0:00:15
[11/26 22:09:43 d2.evaluation.evaluator]: Inference done 1074/1238. 0.0641 s / img. ETA=0:00:10
[11/26 22:09:48 d2.evaluation.evaluator]: Inference done 1150/1238. 0.0641 s / img. ETA=0:00:05
[11/26 22:09:53 d2.evaluation.evaluator]: Inference done 1226/1238. 0.0641 s / img. ETA=0:00:00
[11/26 22:09:54 d2.evaluation.evaluator]: Total inference time: 0:01:22.036135 (0.066534 s / img per device, on 4 devices)
[11/26 22:09:54 d2.evaluation.evaluator]: Total inference pure compute time: 0:01:19 (0.064101 s / img per device, on 4 devices)
[11/26 22:10:05 detectron2]: Image level evaluation complete for custom_voc_2007_test
[11/26 22:10:05 detectron2]: Results for custom_voc_2007_test
Traceback (most recent call last):
  File "main.py", line 199, in <module>
    args=(args,),
  File "/home/dksingh/inseg/detectron2/detectron2/engine/launch.py", line 59, in launch
    daemon=False,
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
    raise Exception(msg)
Exception: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/home/dksingh/inseg/detectron2/detectron2/engine/launch.py", line 94, in _distributed_worker
    main_func(*args)
  File "/home/dksingh/paper_impl2/Elephant-of-object-detection/main.py", line 187, in main
    return do_test(cfg, model)
  File "/home/dksingh/paper_impl2/Elephant-of-object-detection/main.py", line 63, in do_test
    evaluator._coco_api.cats)
  File "/home/dksingh/paper_impl2/Elephant-of-object-detection/WIC.py", line 45, in only_mAP_analysis
    scores = torch.cat(scores)
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:3

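For what it's worth, the crash appears to come from WIC.py calling torch.cat on score tensors gathered from different GPU ranks without first moving them to a common device (hence the cuda:0 vs cuda:3 mismatch). A minimal sketch of a guard that would avoid this, assuming scores is a list of CUDA tensors collected across processes (the helper name below is mine, not from the repo):

import torch

def cat_on_common_device(tensors, device="cpu"):
    # torch.cat requires every input on the same device; after a
    # multi-GPU gather the list can mix tensors from cuda:0 ... cuda:3,
    # so move them all to one device before concatenating.
    return torch.cat([t.to(device) for t in tensors])

# hypothetical patch at the failing line in WIC.py's only_mAP_analysis:
# scores = torch.cat(scores)  ->  scores = cat_on_common_device(scores)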
So I performed the evaluation on a single GPU with the trained model, using the following command:

python main.py --num-gpus 1 --config-file training_configs/faster_rcnn_R_50_FPN.yaml --resume --eval-only

It produces 73.87 mAP on the PASCAL VOC 2007 test set and 67.32 mAP on the combined custom_voc_2007_test + WR1_Mixed_Unknowns evaluation. Test log:

Command Line Args: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=True, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=True)
[11/26 22:12:53 detectron2]: Rank of current process: 0. World size: 1
[11/26 22:12:58 detectron2]: Environment info:
----------------------  -------------------------------------------------------------------------------
sys.platform            linux
Python                  3.7.3 | packaged by conda-forge | (default, Jul  1 2019, 21:52:21) [GCC 7.3.0]
numpy                   1.16.4
detectron2              0.3 @/home/dksingh/inseg/detectron2/detectron2
Compiler                GCC 5.5
CUDA compiler           CUDA 10.2
detectron2 arch flags   6.1
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 1.6.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torch
PyTorch debug build     False
GPU available           True
GPU 0,1,2,3             GeForce GTX 1080 Ti (arch=6.1)
CUDA_HOME               /usr/local/cuda
Pillow                  7.1.2
torchvision             0.7.0 @/home/dksingh/anaconda3/envs/dev/lib/python3.7/site-packages/torchvision
torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5
fvcore                  0.1.2.post20201103
cv2                     4.1.0
----------------------  -------------------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

[11/26 22:12:58 detectron2]: Command line arguments: Namespace(config_file='training_configs/faster_rcnn_R_50_FPN.yaml', dist_url='tcp://127.0.0.1:50712', eval_only=True, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=True)
[11/26 22:12:58 detectron2]: Contents of args.config_file=training_configs/faster_rcnn_R_50_FPN.yaml:
# Configuration for training with 4 gpus
_BASE_: "~/detectron2/detectron2/configs/Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 20
DATASETS:
  TRAIN: ('custom_voc_2007_train','custom_voc_2007_val','custom_voc_2012_train','custom_voc_2012_val',)
  TEST: ('custom_voc_2007_test','WR1_Mixed_Unknowns')
#  TEST: ('custom_voc_2007_test','Mixed_Unknowns')

SOLVER:
  BASE_LR: 0.01
  STEPS: (24000, 32000)
  MAX_ITER: 36000
  WARMUP_ITERS: 100
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
[11/26 22:12:58 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
  ASPECT_RATIO_GROUPING: True
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: ()
  PROPOSAL_FILES_TRAIN: ()
  TEST: ('custom_voc_2007_test', 'WR1_Mixed_Unknowns')
  TRAIN: ('custom_voc_2007_train', 'custom_voc_2007_val', 'custom_voc_2012_train', 'custom_voc_2012_val')
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    ENABLED: False
    SIZE: [0.9, 0.9]
    TYPE: relative_range
  FORMAT: BGR
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1333
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
  MIN_SIZE_TRAIN_SAMPLING: choice
  RANDOM_FLIP: horizontal
MODEL:
  ANCHOR_GENERATOR:
    ANGLES: [[-90, 0, 90]]
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES: [[32], [64], [128], [256], [512]]
  BACKBONE:
    FREEZE_AT: 2
    NAME: build_resnet_fpn_backbone
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
    NORM: 
    OUT_CHANNELS: 256
  KEYPOINT_ON: False
  LOAD_PROPOSALS: False
  MASK_ON: False
  META_ARCHITECTURE: GeneralizedRCNN
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: True
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN: [103.53, 116.28, 123.675]
  PIXEL_STD: [1.0, 1.0, 1.0]
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  RESNETS:
    DEFORM_MODULATED: False
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE: [False, False, False, False]
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: True
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.4, 0.5]
    NMS_THRESH_TEST: 0.5
    NORM: 
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
    IOUS: (0.5, 0.6, 0.7)
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
    CLS_AGNOSTIC_BBOX_REG: False
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: FastRCNNConvFCHead
    NORM: 
    NUM_CONV: 0
    NUM_FC: 2
    POOLER_RESOLUTION: 7
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: False
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    NAME: StandardROIHeads
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 20
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: True
    SCORE_THRESH_TEST: 0.05
  ROI_KEYPOINT_HEAD:
    CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: False
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM: 
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    BOUNDARY_THRESH: -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.3, 0.7]
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 1000
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 2000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl
OUTPUT_DIR: /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/
SEED: -1
SOLVER:
  AMP:
    ENABLED: False
  BASE_LR: 0.01
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 5000
  CLIP_GRADIENTS:
    CLIP_TYPE: value
    CLIP_VALUE: 1.0
    ENABLED: False
    NORM_TYPE: 2.0
  GAMMA: 0.1
  IMS_PER_BATCH: 16
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 36000
  MOMENTUM: 0.9
  NESTEROV: False
  REFERENCE_WORLD_SIZE: 0
  STEPS: (24000, 32000)
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 100
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: False
    FLIP: True
    MAX_SIZE: 4000
    MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 0
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: False
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0
[11/26 22:12:58 detectron2]: Full config saved to /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/config.yaml
[11/26 22:12:58 d2.utils.env]: Using a generated random seed 58088618
[11/26 22:13:07 detectron2]: Model:
GeneralizedRCNN(
  (backbone): FPN(
    (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (top_block): LastLevelMaxPool()
    (bottom_up): ResNet(
      (stem): BasicStem(
        (conv1): Conv2d(
          3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
      )
      (res2): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv1): Conv2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
      )
      (res3): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv1): Conv2d(
            256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
      )
      (res4): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
          (conv1): Conv2d(
            512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (4): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (5): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
      )
      (res5): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
          (conv1): Conv2d(
            1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten()
      (fc1): Linear(in_features=12544, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=21, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=80, bias=True)
    )
  )
)
[11/26 22:13:07 fvcore.common.checkpoint]: Loading checkpoint from /ssd_scratch/cvit/dksingh/overlooked_elephant/fasterrcnn/fasterrcnn_36k_iters/model_final.pth
WARNING [11/26 22:13:08 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[11/26 22:13:08 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/custom_voc_2007_test.json
[11/26 22:13:08 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 0            |  aeroplane  | 311          |   bicycle   | 389          |
|    bird    | 576          |    boat     | 393          |   bottle    | 657          |
|    bus     | 254          |     car     | 1541         |     cat     | 370          |
|   chair    | 1374         |     cow     | 329          | diningtable | 299          |
|    dog     | 530          |    horse    | 395          |  motorbike  | 369          |
|   person   | 5227         | pottedplant | 592          |    sheep    | 311          |
|    sofa    | 396          |    train    | 302          |  tvmonitor  | 361          |
|            |              |             |              |             |              |
|   total    | 14976        |             |              |             |              |
[11/26 22:13:08 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/26 22:13:08 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/26 22:13:08 d2.data.common]: Serialized dataset takes 1.87 MiB
[11/26 22:13:08 d2.evaluation.evaluator]: Start inference on 4952 images
[11/26 22:13:10 d2.evaluation.evaluator]: Inference done 11/4952. 0.0765 s / img. ETA=0:06:25
[11/26 22:13:15 d2.evaluation.evaluator]: Inference done 76/4952. 0.0761 s / img. ETA=0:06:20
[11/26 22:13:20 d2.evaluation.evaluator]: Inference done 141/4952. 0.0760 s / img. ETA=0:06:14
[11/26 22:13:25 d2.evaluation.evaluator]: Inference done 206/4952. 0.0760 s / img. ETA=0:06:09
[11/26 22:13:30 d2.evaluation.evaluator]: Inference done 271/4952. 0.0759 s / img. ETA=0:06:04
[11/26 22:13:35 d2.evaluation.evaluator]: Inference done 336/4952. 0.0759 s / img. ETA=0:05:59
[11/26 22:13:40 d2.evaluation.evaluator]: Inference done 400/4952. 0.0761 s / img. ETA=0:05:55
[11/26 22:13:45 d2.evaluation.evaluator]: Inference done 464/4952. 0.0761 s / img. ETA=0:05:50
[11/26 22:13:51 d2.evaluation.evaluator]: Inference done 529/4952. 0.0761 s / img. ETA=0:05:45
[11/26 22:13:56 d2.evaluation.evaluator]: Inference done 593/4952. 0.0761 s / img. ETA=0:05:40
[11/26 22:14:01 d2.evaluation.evaluator]: Inference done 658/4952. 0.0761 s / img. ETA=0:05:34
[11/26 22:14:06 d2.evaluation.evaluator]: Inference done 722/4952. 0.0762 s / img. ETA=0:05:30
[11/26 22:14:11 d2.evaluation.evaluator]: Inference done 786/4952. 0.0762 s / img. ETA=0:05:25
[11/26 22:14:16 d2.evaluation.evaluator]: Inference done 850/4952. 0.0762 s / img. ETA=0:05:20
[11/26 22:14:21 d2.evaluation.evaluator]: Inference done 914/4952. 0.0763 s / img. ETA=0:05:15
[11/26 22:14:26 d2.evaluation.evaluator]: Inference done 978/4952. 0.0763 s / img. ETA=0:05:10
[11/26 22:14:31 d2.evaluation.evaluator]: Inference done 1042/4952. 0.0764 s / img. ETA=0:05:06
[11/26 22:14:36 d2.evaluation.evaluator]: Inference done 1106/4952. 0.0764 s / img. ETA=0:05:01
[11/26 22:14:41 d2.evaluation.evaluator]: Inference done 1170/4952. 0.0764 s / img. ETA=0:04:56
[11/26 22:14:46 d2.evaluation.evaluator]: Inference done 1234/4952. 0.0765 s / img. ETA=0:04:51
[11/26 22:14:51 d2.evaluation.evaluator]: Inference done 1298/4952. 0.0765 s / img. ETA=0:04:46
[11/26 22:14:56 d2.evaluation.evaluator]: Inference done 1362/4952. 0.0765 s / img. ETA=0:04:41
[11/26 22:15:01 d2.evaluation.evaluator]: Inference done 1426/4952. 0.0765 s / img. ETA=0:04:36
[11/26 22:15:06 d2.evaluation.evaluator]: Inference done 1490/4952. 0.0765 s / img. ETA=0:04:31
[11/26 22:15:11 d2.evaluation.evaluator]: Inference done 1554/4952. 0.0766 s / img. ETA=0:04:26
[11/26 22:15:16 d2.evaluation.evaluator]: Inference done 1617/4952. 0.0766 s / img. ETA=0:04:21
[11/26 22:15:21 d2.evaluation.evaluator]: Inference done 1682/4952. 0.0766 s / img. ETA=0:04:16
[11/26 22:15:26 d2.evaluation.evaluator]: Inference done 1746/4952. 0.0766 s / img. ETA=0:04:11
[11/26 22:15:31 d2.evaluation.evaluator]: Inference done 1810/4952. 0.0766 s / img. ETA=0:04:06
[11/26 22:15:36 d2.evaluation.evaluator]: Inference done 1875/4952. 0.0766 s / img. ETA=0:04:01
[11/26 22:15:42 d2.evaluation.evaluator]: Inference done 1939/4952. 0.0766 s / img. ETA=0:03:56
[11/26 22:15:47 d2.evaluation.evaluator]: Inference done 2003/4952. 0.0766 s / img. ETA=0:03:51
[11/26 22:15:52 d2.evaluation.evaluator]: Inference done 2067/4952. 0.0766 s / img. ETA=0:03:46
[11/26 22:15:57 d2.evaluation.evaluator]: Inference done 2130/4952. 0.0766 s / img. ETA=0:03:41
[11/26 22:16:02 d2.evaluation.evaluator]: Inference done 2194/4952. 0.0767 s / img. ETA=0:03:36
[11/26 22:16:07 d2.evaluation.evaluator]: Inference done 2258/4952. 0.0767 s / img. ETA=0:03:31
[11/26 22:16:12 d2.evaluation.evaluator]: Inference done 2322/4952. 0.0766 s / img. ETA=0:03:26
[11/26 22:16:17 d2.evaluation.evaluator]: Inference done 2386/4952. 0.0767 s / img. ETA=0:03:21
[11/26 22:16:22 d2.evaluation.evaluator]: Inference done 2450/4952. 0.0767 s / img. ETA=0:03:16
[11/26 22:16:27 d2.evaluation.evaluator]: Inference done 2514/4952. 0.0767 s / img. ETA=0:03:11
[11/26 22:16:32 d2.evaluation.evaluator]: Inference done 2578/4952. 0.0767 s / img. ETA=0:03:06
[11/26 22:16:37 d2.evaluation.evaluator]: Inference done 2642/4952. 0.0767 s / img. ETA=0:03:01
[11/26 22:16:42 d2.evaluation.evaluator]: Inference done 2705/4952. 0.0767 s / img. ETA=0:02:56
[11/26 22:16:47 d2.evaluation.evaluator]: Inference done 2768/4952. 0.0768 s / img. ETA=0:02:51
[11/26 22:16:52 d2.evaluation.evaluator]: Inference done 2832/4952. 0.0768 s / img. ETA=0:02:46
[11/26 22:16:57 d2.evaluation.evaluator]: Inference done 2897/4952. 0.0767 s / img. ETA=0:02:41
[11/26 22:17:02 d2.evaluation.evaluator]: Inference done 2961/4952. 0.0767 s / img. ETA=0:02:36
[11/26 22:17:07 d2.evaluation.evaluator]: Inference done 3025/4952. 0.0768 s / img. ETA=0:02:31
[11/26 22:17:12 d2.evaluation.evaluator]: Inference done 3089/4952. 0.0768 s / img. ETA=0:02:26
[11/26 22:17:17 d2.evaluation.evaluator]: Inference done 3153/4952. 0.0768 s / img. ETA=0:02:21
[11/26 22:17:22 d2.evaluation.evaluator]: Inference done 3217/4952. 0.0768 s / img. ETA=0:02:16
[11/26 22:17:27 d2.evaluation.evaluator]: Inference done 3281/4952. 0.0768 s / img. ETA=0:02:11
[11/26 22:17:32 d2.evaluation.evaluator]: Inference done 3345/4952. 0.0768 s / img. ETA=0:02:06
[11/26 22:17:38 d2.evaluation.evaluator]: Inference done 3409/4952. 0.0768 s / img. ETA=0:02:01
[11/26 22:17:43 d2.evaluation.evaluator]: Inference done 3472/4952. 0.0768 s / img. ETA=0:01:56
[11/26 22:17:48 d2.evaluation.evaluator]: Inference done 3535/4952. 0.0768 s / img. ETA=0:01:51
[11/26 22:17:53 d2.evaluation.evaluator]: Inference done 3598/4952. 0.0768 s / img. ETA=0:01:46
[11/26 22:17:58 d2.evaluation.evaluator]: Inference done 3662/4952. 0.0768 s / img. ETA=0:01:41
[11/26 22:18:03 d2.evaluation.evaluator]: Inference done 3726/4952. 0.0768 s / img. ETA=0:01:36
[11/26 22:18:08 d2.evaluation.evaluator]: Inference done 3790/4952. 0.0768 s / img. ETA=0:01:31
[11/26 22:18:13 d2.evaluation.evaluator]: Inference done 3854/4952. 0.0768 s / img. ETA=0:01:26
[11/26 22:18:18 d2.evaluation.evaluator]: Inference done 3918/4952. 0.0768 s / img. ETA=0:01:21
[11/26 22:18:23 d2.evaluation.evaluator]: Inference done 3982/4952. 0.0768 s / img. ETA=0:01:16
[11/26 22:18:28 d2.evaluation.evaluator]: Inference done 4045/4952. 0.0769 s / img. ETA=0:01:11
[11/26 22:18:33 d2.evaluation.evaluator]: Inference done 4109/4952. 0.0768 s / img. ETA=0:01:06
[11/26 22:18:38 d2.evaluation.evaluator]: Inference done 4174/4952. 0.0768 s / img. ETA=0:01:01
[11/26 22:18:43 d2.evaluation.evaluator]: Inference done 4237/4952. 0.0768 s / img. ETA=0:00:56
[11/26 22:18:48 d2.evaluation.evaluator]: Inference done 4300/4952. 0.0769 s / img. ETA=0:00:51
[11/26 22:18:53 d2.evaluation.evaluator]: Inference done 4363/4952. 0.0769 s / img. ETA=0:00:46
[11/26 22:18:58 d2.evaluation.evaluator]: Inference done 4427/4952. 0.0769 s / img. ETA=0:00:41
[11/26 22:19:03 d2.evaluation.evaluator]: Inference done 4491/4952. 0.0769 s / img. ETA=0:00:36
[11/26 22:19:08 d2.evaluation.evaluator]: Inference done 4555/4952. 0.0769 s / img. ETA=0:00:31
[11/26 22:19:13 d2.evaluation.evaluator]: Inference done 4619/4952. 0.0769 s / img. ETA=0:00:26
[11/26 22:19:18 d2.evaluation.evaluator]: Inference done 4683/4952. 0.0769 s / img. ETA=0:00:21
[11/26 22:19:23 d2.evaluation.evaluator]: Inference done 4747/4952. 0.0769 s / img. ETA=0:00:16
[11/26 22:19:28 d2.evaluation.evaluator]: Inference done 4811/4952. 0.0769 s / img. ETA=0:00:11
[11/26 22:19:33 d2.evaluation.evaluator]: Inference done 4875/4952. 0.0769 s / img. ETA=0:00:06
[11/26 22:19:38 d2.evaluation.evaluator]: Inference done 4939/4952. 0.0769 s / img. ETA=0:00:01
[11/26 22:19:39 d2.evaluation.evaluator]: Total inference time: 0:06:29.762767 (0.078788 s / img per device, on 1 devices)
[11/26 22:19:39 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:20 (0.076857 s / img per device, on 1 devices)
[11/26 22:19:39 detectron2]: Image level evaluation complete for custom_voc_2007_test
[11/26 22:19:39 detectron2]: Results for custom_voc_2007_test
[11/26 22:19:40 detectron2]: AP for class no. 0: 0.8057809472084045
[11/26 22:19:40 detectron2]: AP for class no. 1: 0.7805687189102173
[11/26 22:19:40 detectron2]: AP for class no. 2: 0.6984018683433533
[11/26 22:19:40 detectron2]: AP for class no. 3: 0.6062182784080505
[11/26 22:19:40 detectron2]: AP for class no. 4: 0.5762754082679749
[11/26 22:19:41 detectron2]: AP for class no. 5: 0.7758022546768188
[11/26 22:19:41 detectron2]: AP for class no. 6: 0.7926622629165649
[11/26 22:19:41 detectron2]: AP for class no. 7: 0.8785473108291626
[11/26 22:19:41 detectron2]: AP for class no. 8: 0.5703489184379578
[11/26 22:19:41 detectron2]: AP for class no. 9: 0.7864148616790771
[11/26 22:19:41 detectron2]: AP for class no. 10: 0.6691756844520569
[11/26 22:19:41 detectron2]: AP for class no. 11: 0.8530338406562805
[11/26 22:19:42 detectron2]: AP for class no. 12: 0.8538536429405212
[11/26 22:19:42 detectron2]: AP for class no. 13: 0.8232013583183289
[11/26 22:19:42 detectron2]: AP for class no. 14: 0.786263108253479
[11/26 22:19:42 detectron2]: AP for class no. 15: 0.4942459166049957
[11/26 22:19:42 detectron2]: AP for class no. 16: 0.7428907155990601
[11/26 22:19:42 detectron2]: AP for class no. 17: 0.6694597601890564
[11/26 22:19:42 detectron2]: AP for class no. 18: 0.8503661751747131
[11/26 22:19:43 detectron2]: AP for class no. 19: 0.761128842830658
[11/26 22:19:43 detectron2]: mAP: 0.7387319207191467
WARNING [11/26 22:19:43 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[11/26 22:19:43 d2.data.datasets.coco]: Loaded 4952 images in COCO format from protocol/custom_protocols/WR1_Mixed_Unknowns.json
[11/26 22:19:43 d2.data.build]: Distribution of instances among all 21 categories:
|  category  | #instances   |  category   | #instances   |  category   | #instances   |
|:----------:|:-------------|:-----------:|:-------------|:-----------:|:-------------|
|  unknown   | 15235        |  aeroplane  | 0            |   bicycle   | 0            |
|    bird    | 0            |    boat     | 0            |   bottle    | 0            |
|    bus     | 0            |     car     | 0            |     cat     | 0            |
|   chair    | 0            |     cow     | 0            | diningtable | 0            |
|    dog     | 0            |    horse    | 0            |  motorbike  | 0            |
|   person   | 0            | pottedplant | 0            |    sheep    | 0            |
|    sofa    | 0            |    train    | 0            |  tvmonitor  | 0            |
|            |              |             |              |             |              |
|   total    | 15235        |             |              |             |              |
[11/26 22:19:43 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[11/26 22:19:43 d2.data.common]: Serializing 4952 elements to byte tensors and concatenating them all ...
[11/26 22:19:43 d2.data.common]: Serialized dataset takes 8.39 MiB
[11/26 22:19:43 d2.evaluation.evaluator]: Start inference on 4952 images
[11/26 22:19:45 d2.evaluation.evaluator]: Inference done 11/4952. 0.0770 s / img. ETA=0:06:28
[11/26 22:19:50 d2.evaluation.evaluator]: Inference done 74/4952. 0.0777 s / img. ETA=0:06:27
[11/26 22:19:55 d2.evaluation.evaluator]: Inference done 138/4952. 0.0773 s / img. ETA=0:06:20
[11/26 22:20:00 d2.evaluation.evaluator]: Inference done 202/4952. 0.0774 s / img. ETA=0:06:15
[11/26 22:20:05 d2.evaluation.evaluator]: Inference done 265/4952. 0.0776 s / img. ETA=0:06:11
[11/26 22:20:10 d2.evaluation.evaluator]: Inference done 328/4952. 0.0776 s / img. ETA=0:06:07
[11/26 22:20:15 d2.evaluation.evaluator]: Inference done 391/4952. 0.0778 s / img. ETA=0:06:02
[11/26 22:20:20 d2.evaluation.evaluator]: Inference done 455/4952. 0.0777 s / img. ETA=0:05:57
[11/26 22:20:25 d2.evaluation.evaluator]: Inference done 518/4952. 0.0778 s / img. ETA=0:05:52
[11/26 22:20:30 d2.evaluation.evaluator]: Inference done 582/4952. 0.0776 s / img. ETA=0:05:47
[11/26 22:20:35 d2.evaluation.evaluator]: Inference done 645/4952. 0.0777 s / img. ETA=0:05:42
[11/26 22:20:40 d2.evaluation.evaluator]: Inference done 710/4952. 0.0776 s / img. ETA=0:05:36
[11/26 22:20:45 d2.evaluation.evaluator]: Inference done 774/4952. 0.0776 s / img. ETA=0:05:31
[11/26 22:20:50 d2.evaluation.evaluator]: Inference done 838/4952. 0.0775 s / img. ETA=0:05:26
[11/26 22:20:55 d2.evaluation.evaluator]: Inference done 902/4952. 0.0775 s / img. ETA=0:05:21
[11/26 22:21:00 d2.evaluation.evaluator]: Inference done 966/4952. 0.0775 s / img. ETA=0:05:15
[11/26 22:21:05 d2.evaluation.evaluator]: Inference done 1030/4952. 0.0774 s / img. ETA=0:05:10
[11/26 22:21:11 d2.evaluation.evaluator]: Inference done 1094/4952. 0.0774 s / img. ETA=0:05:05
[11/26 22:21:16 d2.evaluation.evaluator]: Inference done 1157/4952. 0.0774 s / img. ETA=0:05:00
[11/26 22:21:21 d2.evaluation.evaluator]: Inference done 1220/4952. 0.0774 s / img. ETA=0:04:55
[11/26 22:21:26 d2.evaluation.evaluator]: Inference done 1283/4952. 0.0775 s / img. ETA=0:04:50
[11/26 22:21:31 d2.evaluation.evaluator]: Inference done 1346/4952. 0.0775 s / img. ETA=0:04:45
[11/26 22:21:36 d2.evaluation.evaluator]: Inference done 1409/4952. 0.0775 s / img. ETA=0:04:41
[11/26 22:21:41 d2.evaluation.evaluator]: Inference done 1473/4952. 0.0775 s / img. ETA=0:04:35
[11/26 22:21:46 d2.evaluation.evaluator]: Inference done 1537/4952. 0.0775 s / img. ETA=0:04:30
[11/26 22:21:51 d2.evaluation.evaluator]: Inference done 1600/4952. 0.0775 s / img. ETA=0:04:25
[11/26 22:21:56 d2.evaluation.evaluator]: Inference done 1662/4952. 0.0776 s / img. ETA=0:04:21
[11/26 22:22:01 d2.evaluation.evaluator]: Inference done 1726/4952. 0.0775 s / img. ETA=0:04:16
[11/26 22:22:06 d2.evaluation.evaluator]: Inference done 1791/4952. 0.0775 s / img. ETA=0:04:10
[11/26 22:22:11 d2.evaluation.evaluator]: Inference done 1855/4952. 0.0775 s / img. ETA=0:04:05
[11/26 22:22:16 d2.evaluation.evaluator]: Inference done 1919/4952. 0.0775 s / img. ETA=0:04:00
[11/26 22:22:21 d2.evaluation.evaluator]: Inference done 1981/4952. 0.0775 s / img. ETA=0:03:55
[11/26 22:22:26 d2.evaluation.evaluator]: Inference done 2044/4952. 0.0775 s / img. ETA=0:03:50
[11/26 22:22:31 d2.evaluation.evaluator]: Inference done 2107/4952. 0.0776 s / img. ETA=0:03:45
[11/26 22:22:36 d2.evaluation.evaluator]: Inference done 2171/4952. 0.0775 s / img. ETA=0:03:40
[11/26 22:22:41 d2.evaluation.evaluator]: Inference done 2234/4952. 0.0775 s / img. ETA=0:03:35
[11/26 22:22:46 d2.evaluation.evaluator]: Inference done 2297/4952. 0.0776 s / img. ETA=0:03:30
[11/26 22:22:51 d2.evaluation.evaluator]: Inference done 2360/4952. 0.0776 s / img. ETA=0:03:25
[11/26 22:22:56 d2.evaluation.evaluator]: Inference done 2424/4952. 0.0776 s / img. ETA=0:03:20
[11/26 22:23:01 d2.evaluation.evaluator]: Inference done 2487/4952. 0.0776 s / img. ETA=0:03:15
[11/26 22:23:06 d2.evaluation.evaluator]: Inference done 2551/4952. 0.0776 s / img. ETA=0:03:10
[11/26 22:23:11 d2.evaluation.evaluator]: Inference done 2614/4952. 0.0776 s / img. ETA=0:03:05
[11/26 22:23:16 d2.evaluation.evaluator]: Inference done 2678/4952. 0.0776 s / img. ETA=0:03:00
[11/26 22:23:22 d2.evaluation.evaluator]: Inference done 2742/4952. 0.0776 s / img. ETA=0:02:55
[11/26 22:23:27 d2.evaluation.evaluator]: Inference done 2807/4952. 0.0775 s / img. ETA=0:02:50
[11/26 22:23:32 d2.evaluation.evaluator]: Inference done 2871/4952. 0.0775 s / img. ETA=0:02:45
[11/26 22:23:37 d2.evaluation.evaluator]: Inference done 2934/4952. 0.0775 s / img. ETA=0:02:40
[11/26 22:23:42 d2.evaluation.evaluator]: Inference done 2998/4952. 0.0775 s / img. ETA=0:02:34
[11/26 22:23:47 d2.evaluation.evaluator]: Inference done 3061/4952. 0.0775 s / img. ETA=0:02:30
[11/26 22:23:52 d2.evaluation.evaluator]: Inference done 3125/4952. 0.0775 s / img. ETA=0:02:24
[11/26 22:23:57 d2.evaluation.evaluator]: Inference done 3189/4952. 0.0775 s / img. ETA=0:02:19
[11/26 22:24:02 d2.evaluation.evaluator]: Inference done 3252/4952. 0.0775 s / img. ETA=0:02:14
[11/26 22:24:07 d2.evaluation.evaluator]: Inference done 3315/4952. 0.0775 s / img. ETA=0:02:09
[11/26 22:24:12 d2.evaluation.evaluator]: Inference done 3378/4952. 0.0775 s / img. ETA=0:02:04
[11/26 22:24:17 d2.evaluation.evaluator]: Inference done 3443/4952. 0.0775 s / img. ETA=0:01:59
[11/26 22:24:22 d2.evaluation.evaluator]: Inference done 3506/4952. 0.0775 s / img. ETA=0:01:54
[11/26 22:24:27 d2.evaluation.evaluator]: Inference done 3569/4952. 0.0775 s / img. ETA=0:01:49
[11/26 22:24:32 d2.evaluation.evaluator]: Inference done 3632/4952. 0.0775 s / img. ETA=0:01:44
[11/26 22:24:37 d2.evaluation.evaluator]: Inference done 3696/4952. 0.0775 s / img. ETA=0:01:39
[11/26 22:24:42 d2.evaluation.evaluator]: Inference done 3760/4952. 0.0775 s / img. ETA=0:01:34
[11/26 22:24:47 d2.evaluation.evaluator]: Inference done 3822/4952. 0.0775 s / img. ETA=0:01:29
[11/26 22:24:52 d2.evaluation.evaluator]: Inference done 3886/4952. 0.0775 s / img. ETA=0:01:24
[11/26 22:24:57 d2.evaluation.evaluator]: Inference done 3949/4952. 0.0775 s / img. ETA=0:01:19
[11/26 22:25:02 d2.evaluation.evaluator]: Inference done 4013/4952. 0.0775 s / img. ETA=0:01:14
[11/26 22:25:07 d2.evaluation.evaluator]: Inference done 4076/4952. 0.0775 s / img. ETA=0:01:09
[11/26 22:25:12 d2.evaluation.evaluator]: Inference done 4139/4952. 0.0775 s / img. ETA=0:01:04
[11/26 22:25:17 d2.evaluation.evaluator]: Inference done 4202/4952. 0.0776 s / img. ETA=0:00:59
[11/26 22:25:22 d2.evaluation.evaluator]: Inference done 4265/4952. 0.0776 s / img. ETA=0:00:54
[11/26 22:25:28 d2.evaluation.evaluator]: Inference done 4329/4952. 0.0776 s / img. ETA=0:00:49
[11/26 22:25:33 d2.evaluation.evaluator]: Inference done 4393/4952. 0.0776 s / img. ETA=0:00:44
[11/26 22:25:38 d2.evaluation.evaluator]: Inference done 4457/4952. 0.0776 s / img. ETA=0:00:39
[11/26 22:25:43 d2.evaluation.evaluator]: Inference done 4521/4952. 0.0776 s / img. ETA=0:00:34
[11/26 22:25:48 d2.evaluation.evaluator]: Inference done 4584/4952. 0.0776 s / img. ETA=0:00:29
[11/26 22:25:53 d2.evaluation.evaluator]: Inference done 4648/4952. 0.0776 s / img. ETA=0:00:24
[11/26 22:25:58 d2.evaluation.evaluator]: Inference done 4712/4952. 0.0775 s / img. ETA=0:00:19
[11/26 22:26:03 d2.evaluation.evaluator]: Inference done 4775/4952. 0.0776 s / img. ETA=0:00:14
[11/26 22:26:08 d2.evaluation.evaluator]: Inference done 4838/4952. 0.0776 s / img. ETA=0:00:09
[11/26 22:26:13 d2.evaluation.evaluator]: Inference done 4901/4952. 0.0776 s / img. ETA=0:00:04
[11/26 22:26:17 d2.evaluation.evaluator]: Total inference time: 0:06:32.709267 (0.079383 s / img per device, on 1 devices)
[11/26 22:26:17 d2.evaluation.evaluator]: Total inference pure compute time: 0:06:23 (0.077551 s / img per device, on 1 devices)
[11/26 22:26:17 detectron2]: Image level evaluation complete for WR1_Mixed_Unknowns
[11/26 22:26:17 detectron2]: Results for WR1_Mixed_Unknowns
[11/26 22:26:17 detectron2]: AP for class no. 0: 0.0
[11/26 22:26:17 detectron2]: AP for class no. 1: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 2: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 3: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 4: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 5: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 6: 0.0
[11/26 22:26:18 detectron2]: AP for class no. 7: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 8: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 9: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 10: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 11: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 12: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 13: 0.0
[11/26 22:26:19 detectron2]: AP for class no. 14: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 15: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 16: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 17: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 18: 0.0
[11/26 22:26:20 detectron2]: AP for class no. 19: 0.0
[11/26 22:26:20 detectron2]: mAP: 0.0
[11/26 22:26:20 detectron2]: Combined results for datasets custom_voc_2007_test, WR1_Mixed_Unknowns
[11/26 22:26:20 detectron2]: AP for class no. 0: 0.7918533086776733
[11/26 22:26:20 detectron2]: AP for class no. 1: 0.7716333866119385
[11/26 22:26:20 detectron2]: AP for class no. 2: 0.6370219588279724
[11/26 22:26:20 detectron2]: AP for class no. 3: 0.5901530385017395
[11/26 22:26:20 detectron2]: AP for class no. 4: 0.5282232165336609
[11/26 22:26:20 detectron2]: AP for class no. 5: 0.746399462223053
[11/26 22:26:20 detectron2]: AP for class no. 6: 0.7705965638160706
[11/26 22:26:20 detectron2]: AP for class no. 7: 0.8529040813446045
[11/26 22:26:20 detectron2]: AP for class no. 8: 0.5244924426078796
[11/26 22:26:20 detectron2]: AP for class no. 9: 0.6216275691986084
[11/26 22:26:20 detectron2]: AP for class no. 10: 0.4555111825466156
[11/26 22:26:20 detectron2]: AP for class no. 11: 0.7658306360244751
[11/26 22:26:20 detectron2]: AP for class no. 12: 0.7069307565689087
[11/26 22:26:20 detectron2]: AP for class no. 13: 0.8073904514312744
[11/26 22:26:20 detectron2]: AP for class no. 14: 0.7774863839149475
[11/26 22:26:20 detectron2]: AP for class no. 15: 0.40883171558380127
[11/26 22:26:20 detectron2]: AP for class no. 16: 0.6260425448417664
[11/26 22:26:20 detectron2]: AP for class no. 17: 0.5717162489891052
[11/26 22:26:20 detectron2]: AP for class no. 18: 0.8286178708076477
[11/26 22:26:20 detectron2]: AP for class no. 19: 0.6816982626914978
[11/26 22:26:20 detectron2]: mAP: 0.6732480525970459
[11/26 22:26:21 detectron2]: ************************** Performance at Wilderness level 0.00 **************************
[11/26 22:26:21 detectron2]: AP for class no. 0 at wilderness 0.00: 0.8057809472084045
[11/26 22:26:21 detectron2]: AP for class no. 1 at wilderness 0.00: 0.7805687189102173
[11/26 22:26:21 detectron2]: AP for class no. 2 at wilderness 0.00: 0.6984018683433533
[11/26 22:26:21 detectron2]: AP for class no. 3 at wilderness 0.00: 0.6062182784080505
[11/26 22:26:21 detectron2]: AP for class no. 4 at wilderness 0.00: 0.5762754082679749
[11/26 22:26:21 detectron2]: AP for class no. 5 at wilderness 0.00: 0.7758022546768188
[11/26 22:26:21 detectron2]: AP for class no. 6 at wilderness 0.00: 0.7926622629165649
[11/26 22:26:21 detectron2]: AP for class no. 7 at wilderness 0.00: 0.8785473108291626
[11/26 22:26:21 detectron2]: AP for class no. 8 at wilderness 0.00: 0.5703489184379578
[11/26 22:26:21 detectron2]: AP for class no. 9 at wilderness 0.00: 0.7864148616790771
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.00: 0.6691756844520569
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.00: 0.8530338406562805
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.00: 0.8538536429405212
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.00: 0.8232013583183289
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.00: 0.786263108253479
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.00: 0.4942459166049957
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.00: 0.7428907155990601
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.00: 0.6694597601890564
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.00: 0.8503661751747131
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.00: 0.761128842830658
[11/26 22:26:22 detectron2]: mAP at wilderness 0.00: 0.7387319207191467
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.10 **************************
[11/26 22:26:22 detectron2]: AP for class no. 0 at wilderness 0.10: 0.804438054561615
[11/26 22:26:22 detectron2]: AP for class no. 1 at wilderness 0.10: 0.7796733975410461
[11/26 22:26:22 detectron2]: AP for class no. 2 at wilderness 0.10: 0.6916602253913879
[11/26 22:26:22 detectron2]: AP for class no. 3 at wilderness 0.10: 0.6043260097503662
[11/26 22:26:22 detectron2]: AP for class no. 4 at wilderness 0.10: 0.5696216821670532
[11/26 22:26:22 detectron2]: AP for class no. 5 at wilderness 0.10: 0.7739158868789673
[11/26 22:26:22 detectron2]: AP for class no. 6 at wilderness 0.10: 0.7923885583877563
[11/26 22:26:22 detectron2]: AP for class no. 7 at wilderness 0.10: 0.8749563694000244
[11/26 22:26:22 detectron2]: AP for class no. 8 at wilderness 0.10: 0.566605806350708
[11/26 22:26:22 detectron2]: AP for class no. 9 at wilderness 0.10: 0.7556943893432617
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.10: 0.6324914693832397
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.10: 0.83938068151474
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.10: 0.8256282806396484
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.10: 0.8221734762191772
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.10: 0.7857226729393005
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.10: 0.4817371666431427
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.10: 0.7131674289703369
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.10: 0.6609698534011841
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.10: 0.8460453152656555
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.10: 0.752105176448822
[11/26 22:26:22 detectron2]: mAP at wilderness 0.10: 0.7286350131034851
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.20 **************************
[11/26 22:26:22 detectron2]: AP for class no. 0 at wilderness 0.20: 0.8038345575332642
[11/26 22:26:22 detectron2]: AP for class no. 1 at wilderness 0.20: 0.7789458632469177
[11/26 22:26:22 detectron2]: AP for class no. 2 at wilderness 0.20: 0.6834487318992615
[11/26 22:26:22 detectron2]: AP for class no. 3 at wilderness 0.20: 0.6027707457542419
[11/26 22:26:22 detectron2]: AP for class no. 4 at wilderness 0.20: 0.5652671456336975
[11/26 22:26:22 detectron2]: AP for class no. 5 at wilderness 0.20: 0.7709394097328186
[11/26 22:26:22 detectron2]: AP for class no. 6 at wilderness 0.20: 0.7894852161407471
[11/26 22:26:22 detectron2]: AP for class no. 7 at wilderness 0.20: 0.8729369640350342
[11/26 22:26:22 detectron2]: AP for class no. 8 at wilderness 0.20: 0.5598958730697632
[11/26 22:26:22 detectron2]: AP for class no. 9 at wilderness 0.20: 0.726253092288971
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.20: 0.6147267818450928
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.20: 0.831853449344635
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.20: 0.8023135662078857
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.20: 0.8200926184654236
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.20: 0.7845827341079712
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.20: 0.4718787968158722
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.20: 0.6918453574180603
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.20: 0.6525976061820984
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.20: 0.8430576324462891
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.20: 0.742803156375885
[11/26 22:26:22 detectron2]: mAP at wilderness 0.20: 0.720476508140564
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.30 **************************
[11/26 22:26:22 detectron2]: AP for class no. 0 at wilderness 0.30: 0.8021126389503479
[11/26 22:26:22 detectron2]: AP for class no. 1 at wilderness 0.30: 0.7785171270370483
[11/26 22:26:22 detectron2]: AP for class no. 2 at wilderness 0.30: 0.6781996488571167
[11/26 22:26:22 detectron2]: AP for class no. 3 at wilderness 0.30: 0.6018539071083069
[11/26 22:26:22 detectron2]: AP for class no. 4 at wilderness 0.30: 0.5566883683204651
[11/26 22:26:22 detectron2]: AP for class no. 5 at wilderness 0.30: 0.7685409784317017
[11/26 22:26:22 detectron2]: AP for class no. 6 at wilderness 0.30: 0.7867055535316467
[11/26 22:26:22 detectron2]: AP for class no. 7 at wilderness 0.30: 0.8666488528251648
[11/26 22:26:22 detectron2]: AP for class no. 8 at wilderness 0.30: 0.5532926321029663
[11/26 22:26:22 detectron2]: AP for class no. 9 at wilderness 0.30: 0.694472074508667
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.30: 0.5853824019432068
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.30: 0.8148987293243408
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.30: 0.7823802828788757
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.30: 0.818779706954956
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.30: 0.783781111240387
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.30: 0.4620145857334137
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.30: 0.6779978275299072
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.30: 0.6455773711204529
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.30: 0.8411329388618469
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.30: 0.7316110134124756
[11/26 22:26:22 detectron2]: mAP at wilderness 0.30: 0.7115293741226196
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.40 **************************
[11/26 22:26:22 detectron2]: AP for class no. 0 at wilderness 0.40: 0.8021126389503479
[11/26 22:26:22 detectron2]: AP for class no. 1 at wilderness 0.40: 0.7769022583961487
[11/26 22:26:22 detectron2]: AP for class no. 2 at wilderness 0.40: 0.6708421111106873
[11/26 22:26:22 detectron2]: AP for class no. 3 at wilderness 0.40: 0.5991898775100708
[11/26 22:26:22 detectron2]: AP for class no. 4 at wilderness 0.40: 0.5525819659233093
[11/26 22:26:22 detectron2]: AP for class no. 5 at wilderness 0.40: 0.7674712538719177
[11/26 22:26:22 detectron2]: AP for class no. 6 at wilderness 0.40: 0.7853753566741943
[11/26 22:26:22 detectron2]: AP for class no. 7 at wilderness 0.40: 0.8651075959205627
[11/26 22:26:22 detectron2]: AP for class no. 8 at wilderness 0.40: 0.5485300421714783
[11/26 22:26:22 detectron2]: AP for class no. 9 at wilderness 0.40: 0.6755051016807556
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.40: 0.5651013255119324
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.40: 0.8073036074638367
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.40: 0.7736687064170837
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.40: 0.8166764974594116
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.40: 0.7825345396995544
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.40: 0.45249173045158386
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.40: 0.6693383455276489
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.40: 0.6324578523635864
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.40: 0.8387308716773987
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.40: 0.726614236831665
[11/26 22:26:22 detectron2]: mAP at wilderness 0.40: 0.7054267525672913
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.50 **************************
[11/26 22:26:22 detectron2]: AP for class no. 0 at wilderness 0.50: 0.8012487888336182
[11/26 22:26:22 detectron2]: AP for class no. 1 at wilderness 0.50: 0.7758060097694397
[11/26 22:26:22 detectron2]: AP for class no. 2 at wilderness 0.50: 0.665831983089447
[11/26 22:26:22 detectron2]: AP for class no. 3 at wilderness 0.50: 0.5980117917060852
[11/26 22:26:22 detectron2]: AP for class no. 4 at wilderness 0.50: 0.5491288900375366
[11/26 22:26:22 detectron2]: AP for class no. 5 at wilderness 0.50: 0.7666521072387695
[11/26 22:26:22 detectron2]: AP for class no. 6 at wilderness 0.50: 0.7830488681793213
[11/26 22:26:22 detectron2]: AP for class no. 7 at wilderness 0.50: 0.8607292771339417
[11/26 22:26:22 detectron2]: AP for class no. 8 at wilderness 0.50: 0.5443422198295593
[11/26 22:26:22 detectron2]: AP for class no. 9 at wilderness 0.50: 0.6675994992256165
[11/26 22:26:22 detectron2]: AP for class no. 10 at wilderness 0.50: 0.5508967041969299
[11/26 22:26:22 detectron2]: AP for class no. 11 at wilderness 0.50: 0.8002014756202698
[11/26 22:26:22 detectron2]: AP for class no. 12 at wilderness 0.50: 0.7540932297706604
[11/26 22:26:22 detectron2]: AP for class no. 13 at wilderness 0.50: 0.8157699108123779
[11/26 22:26:22 detectron2]: AP for class no. 14 at wilderness 0.50: 0.7819458246231079
[11/26 22:26:22 detectron2]: AP for class no. 15 at wilderness 0.50: 0.4405312240123749
[11/26 22:26:22 detectron2]: AP for class no. 16 at wilderness 0.50: 0.6639074683189392
[11/26 22:26:22 detectron2]: AP for class no. 17 at wilderness 0.50: 0.6185230016708374
[11/26 22:26:22 detectron2]: AP for class no. 18 at wilderness 0.50: 0.836235761642456
[11/26 22:26:22 detectron2]: AP for class no. 19 at wilderness 0.50: 0.7149853110313416
[11/26 22:26:22 detectron2]: mAP at wilderness 0.50: 0.6994744539260864
[11/26 22:26:22 detectron2]: ************************** Performance at Wilderness level 0.60 **************************
[11/26 22:26:23 detectron2]: AP for class no. 0 at wilderness 0.60: 0.7966757416725159
[11/26 22:26:23 detectron2]: AP for class no. 1 at wilderness 0.60: 0.7748817801475525
[11/26 22:26:23 detectron2]: AP for class no. 2 at wilderness 0.60: 0.6601483821868896
[11/26 22:26:23 detectron2]: AP for class no. 3 at wilderness 0.60: 0.5965762138366699
[11/26 22:26:23 detectron2]: AP for class no. 4 at wilderness 0.60: 0.5440265536308289
[11/26 22:26:23 detectron2]: AP for class no. 5 at wilderness 0.60: 0.7626188397407532
[11/26 22:26:23 detectron2]: AP for class no. 6 at wilderness 0.60: 0.7800941467285156
[11/26 22:26:23 detectron2]: AP for class no. 7 at wilderness 0.60: 0.8594475388526917
[11/26 22:26:23 detectron2]: AP for class no. 8 at wilderness 0.60: 0.5392826199531555
[11/26 22:26:23 detectron2]: AP for class no. 9 at wilderness 0.60: 0.6606490612030029
[11/26 22:26:23 detectron2]: AP for class no. 10 at wilderness 0.60: 0.5300838947296143
[11/26 22:26:23 detectron2]: AP for class no. 11 at wilderness 0.60: 0.7948688864707947
[11/26 22:26:23 detectron2]: AP for class no. 12 at wilderness 0.60: 0.7452663779258728
[11/26 22:26:23 detectron2]: AP for class no. 13 at wilderness 0.60: 0.8138160109519958
[11/26 22:26:23 detectron2]: AP for class no. 14 at wilderness 0.60: 0.7807357907295227
[11/26 22:26:23 detectron2]: AP for class no. 15 at wilderness 0.60: 0.42916011810302734
[11/26 22:26:23 detectron2]: AP for class no. 16 at wilderness 0.60: 0.6592159867286682
[11/26 22:26:23 detectron2]: AP for class no. 17 at wilderness 0.60: 0.6051133871078491
[11/26 22:26:23 detectron2]: AP for class no. 18 at wilderness 0.60: 0.8345475792884827
[11/26 22:26:23 detectron2]: AP for class no. 19 at wilderness 0.60: 0.7058286070823669
[11/26 22:26:23 detectron2]: mAP at wilderness 0.60: 0.6936518549919128
[11/26 22:26:23 detectron2]: ************************** Performance at Wilderness level 0.70 **************************
[11/26 22:26:23 detectron2]: AP for class no. 0 at wilderness 0.70: 0.7958589196205139
[11/26 22:26:23 detectron2]: AP for class no. 1 at wilderness 0.70: 0.7733601927757263
[11/26 22:26:23 detectron2]: AP for class no. 2 at wilderness 0.70: 0.6554779410362244
[11/26 22:26:23 detectron2]: AP for class no. 3 at wilderness 0.70: 0.5954850316047668
[11/26 22:26:23 detectron2]: AP for class no. 4 at wilderness 0.70: 0.5386600494384766
[11/26 22:26:23 detectron2]: AP for class no. 5 at wilderness 0.70: 0.7579004764556885
[11/26 22:26:23 detectron2]: AP for class no. 6 at wilderness 0.70: 0.7753849625587463
[11/26 22:26:23 detectron2]: AP for class no. 7 at wilderness 0.70: 0.8579299449920654
[11/26 22:26:23 detectron2]: AP for class no. 8 at wilderness 0.70: 0.5357330441474915
[11/26 22:26:23 detectron2]: AP for class no. 9 at wilderness 0.70: 0.6533847451210022
[11/26 22:26:23 detectron2]: AP for class no. 10 at wilderness 0.70: 0.5144810080528259
[11/26 22:26:23 detectron2]: AP for class no. 11 at wilderness 0.70: 0.7861344814300537
[11/26 22:26:23 detectron2]: AP for class no. 12 at wilderness 0.70: 0.7389581203460693
[11/26 22:26:23 detectron2]: AP for class no. 13 at wilderness 0.70: 0.8119361996650696
[11/26 22:26:23 detectron2]: AP for class no. 14 at wilderness 0.70: 0.7797604203224182
[11/26 22:26:23 detectron2]: AP for class no. 15 at wilderness 0.70: 0.4227277636528015
[11/26 22:26:23 detectron2]: AP for class no. 16 at wilderness 0.70: 0.651122510433197
[11/26 22:26:23 detectron2]: AP for class no. 17 at wilderness 0.70: 0.5960981249809265
[11/26 22:26:23 detectron2]: AP for class no. 18 at wilderness 0.70: 0.8328948616981506
[11/26 22:26:23 detectron2]: AP for class no. 19 at wilderness 0.70: 0.6976178884506226
[11/26 22:26:23 detectron2]: mAP at wilderness 0.70: 0.6885453462600708
[11/26 22:26:23 detectron2]: ************************** Performance at Wilderness level 0.80 **************************
[11/26 22:26:23 detectron2]: AP for class no. 0 at wilderness 0.80: 0.7941478490829468
[11/26 22:26:23 detectron2]: AP for class no. 1 at wilderness 0.80: 0.7725073099136353
[11/26 22:26:23 detectron2]: AP for class no. 2 at wilderness 0.80: 0.6468381285667419
[11/26 22:26:23 detectron2]: AP for class no. 3 at wilderness 0.80: 0.5937878489494324
[11/26 22:26:23 detectron2]: AP for class no. 4 at wilderness 0.80: 0.5349993705749512
[11/26 22:26:23 detectron2]: AP for class no. 5 at wilderness 0.80: 0.7492397427558899
[11/26 22:26:23 detectron2]: AP for class no. 6 at wilderness 0.80: 0.773811399936676
[11/26 22:26:23 detectron2]: AP for class no. 7 at wilderness 0.80: 0.8561802506446838
[11/26 22:26:23 detectron2]: AP for class no. 8 at wilderness 0.80: 0.5323456525802612
[11/26 22:26:23 detectron2]: AP for class no. 9 at wilderness 0.80: 0.6361750364303589
[11/26 22:26:23 detectron2]: AP for class no. 10 at wilderness 0.80: 0.48390886187553406
[11/26 22:26:23 detectron2]: AP for class no. 11 at wilderness 0.80: 0.7755957245826721
[11/26 22:26:23 detectron2]: AP for class no. 12 at wilderness 0.80: 0.7253042459487915
[11/26 22:26:23 detectron2]: AP for class no. 13 at wilderness 0.80: 0.809639573097229
[11/26 22:26:23 detectron2]: AP for class no. 14 at wilderness 0.80: 0.7793474197387695
[11/26 22:26:23 detectron2]: AP for class no. 15 at wilderness 0.80: 0.41905367374420166
[11/26 22:26:23 detectron2]: AP for class no. 16 at wilderness 0.80: 0.6441572904586792
[11/26 22:26:23 detectron2]: AP for class no. 17 at wilderness 0.80: 0.5905238389968872
[11/26 22:26:23 detectron2]: AP for class no. 18 at wilderness 0.80: 0.8312480449676514
[11/26 22:26:23 detectron2]: AP for class no. 19 at wilderness 0.80: 0.691842794418335
[11/26 22:26:23 detectron2]: mAP at wilderness 0.80: 0.6820327639579773
[11/26 22:26:23 detectron2]: ************************** Performance at Wilderness level 0.90 **************************
[11/26 22:26:23 detectron2]: AP for class no. 0 at wilderness 0.90: 0.792736828327179
[11/26 22:26:23 detectron2]: AP for class no. 1 at wilderness 0.90: 0.7721492648124695
[11/26 22:26:23 detectron2]: AP for class no. 2 at wilderness 0.90: 0.6420716643333435
[11/26 22:26:23 detectron2]: AP for class no. 3 at wilderness 0.90: 0.5918503403663635
[11/26 22:26:23 detectron2]: AP for class no. 4 at wilderness 0.90: 0.5326953530311584
[11/26 22:26:23 detectron2]: AP for class no. 5 at wilderness 0.90: 0.7487021088600159
[11/26 22:26:23 detectron2]: AP for class no. 6 at wilderness 0.90: 0.7716047763824463
[11/26 22:26:23 detectron2]: AP for class no. 7 at wilderness 0.90: 0.854636549949646
[11/26 22:26:23 detectron2]: AP for class no. 8 at wilderness 0.90: 0.5296754240989685
[11/26 22:26:23 detectron2]: AP for class no. 9 at wilderness 0.90: 0.631446897983551
[11/26 22:26:23 detectron2]: AP for class no. 10 at wilderness 0.90: 0.4674086570739746
[11/26 22:26:23 detectron2]: AP for class no. 11 at wilderness 0.90: 0.7721824049949646
[11/26 22:26:23 detectron2]: AP for class no. 12 at wilderness 0.90: 0.7174472212791443
[11/26 22:26:23 detectron2]: AP for class no. 13 at wilderness 0.90: 0.8082637786865234
[11/26 22:26:23 detectron2]: AP for class no. 14 at wilderness 0.90: 0.7785699963569641
[11/26 22:26:23 detectron2]: AP for class no. 15 at wilderness 0.90: 0.41314542293548584
[11/26 22:26:23 detectron2]: AP for class no. 16 at wilderness 0.90: 0.6373736262321472
[11/26 22:26:23 detectron2]: AP for class no. 17 at wilderness 0.90: 0.5822394490242004
[11/26 22:26:23 detectron2]: AP for class no. 18 at wilderness 0.90: 0.8297340869903564
[11/26 22:26:23 detectron2]: AP for class no. 19 at wilderness 0.90: 0.6896778345108032
[11/26 22:26:23 detectron2]: mAP at wilderness 0.90: 0.6781805753707886
[11/26 22:26:23 detectron2]: ************************** Performance at Wilderness level 1.00 **************************
[11/26 22:26:24 detectron2]: AP for class no. 0 at wilderness 1.00: 0.7918533086776733
[11/26 22:26:24 detectron2]: AP for class no. 1 at wilderness 1.00: 0.7716333866119385
[11/26 22:26:24 detectron2]: AP for class no. 2 at wilderness 1.00: 0.6370219588279724
[11/26 22:26:24 detectron2]: AP for class no. 3 at wilderness 1.00: 0.5901530385017395
[11/26 22:26:24 detectron2]: AP for class no. 4 at wilderness 1.00: 0.5282232165336609
[11/26 22:26:24 detectron2]: AP for class no. 5 at wilderness 1.00: 0.746399462223053
[11/26 22:26:24 detectron2]: AP for class no. 6 at wilderness 1.00: 0.7705965638160706
[11/26 22:26:24 detectron2]: AP for class no. 7 at wilderness 1.00: 0.8529040813446045
[11/26 22:26:24 detectron2]: AP for class no. 8 at wilderness 1.00: 0.5244924426078796
[11/26 22:26:24 detectron2]: AP for class no. 9 at wilderness 1.00: 0.6216275691986084
[11/26 22:26:24 detectron2]: AP for class no. 10 at wilderness 1.00: 0.4555111825466156
[11/26 22:26:24 detectron2]: AP for class no. 11 at wilderness 1.00: 0.7658306360244751
[11/26 22:26:24 detectron2]: AP for class no. 12 at wilderness 1.00: 0.7069307565689087
[11/26 22:26:24 detectron2]: AP for class no. 13 at wilderness 1.00: 0.8073904514312744
[11/26 22:26:24 detectron2]: AP for class no. 14 at wilderness 1.00: 0.7774851322174072
[11/26 22:26:24 detectron2]: AP for class no. 15 at wilderness 1.00: 0.40883171558380127
[11/26 22:26:24 detectron2]: AP for class no. 16 at wilderness 1.00: 0.6260425448417664
[11/26 22:26:24 detectron2]: AP for class no. 17 at wilderness 1.00: 0.5717162489891052
[11/26 22:26:24 detectron2]: AP for class no. 18 at wilderness 1.00: 0.8286178708076477
[11/26 22:26:24 detectron2]: AP for class no. 19 at wilderness 1.00: 0.6816982626914978
[11/26 22:26:24 detectron2]: mAP at wilderness 1.00: 0.6732479333877563

Could you confirm whether this discrepancy is reasonable when evaluating on 4 GPUs?
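In case it helps, this is the small script I used to pull the per-wilderness mAP values out of the saved log for comparison (the log path is made up; the regex just matches the "mAP at wilderness" lines printed above):

import re

# Minimal sketch: collect the per-wilderness mAP values from the log
# so they can be compared against the paper's reported numbers.
pattern = re.compile(r"mAP at wilderness (\d+\.\d+): (\d+\.\d+)")

results = {}
with open("eval.log") as f:  # hypothetical path to the saved evaluation log
    for line in f:
        match = pattern.search(line)
        if match:
            results[float(match.group(1))] = float(match.group(2))

for wilderness, map_value in sorted(results.items()):
    print(f"wilderness {wilderness:.2f}: mAP {map_value:.4f}")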

d12306 commented 3 years ago

@akshay-raj-dhamija, hello, could you explain how to obtain the Wilderness Impact reported in Table 2 of the paper from these logged results?
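For context, my current reading of the paper is that Wilderness Impact at a given wilderness level is WI = P_K / P_{K∪U} - 1, i.e., the precision on known classes only divided by the precision once wilderness images are mixed in, minus one, measured at a fixed recall. If that is right, the mAP values in the log above are not WI themselves. A minimal sketch of that arithmetic, with both precision values made up purely for illustration:

def wilderness_impact(p_closed, p_open):
    # WI = P_K / P_{K∪U} - 1, as I understand the paper's definition:
    # p_closed: precision at wilderness 0.0 (known classes only)
    # p_open:   precision at the wilderness level of interest
    return p_closed / p_open - 1.0

# Hypothetical precision values read off a precision-recall curve at recall 0.8:
print(wilderness_impact(0.82, 0.74))  # -> ~0.1081

Is this the computation the evaluation code performs internally, and if so, at which recall level are the precisions taken?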

Thanks,