facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

LazyConfig pre-trained model showing poor performance #4211

Closed jbutle55 closed 2 years ago

jbutle55 commented 2 years ago

Instructions To Reproduce the Issue:

Running the code below produces unexpected results: very poor evaluation performance from a pre-trained model.

1. Full runnable code or full changes you made:

   File below: `test_lazy.py`
   The registered COCO datasets are the standard datasets from https://cocodataset.org/#home, re-registered under a different directory structure than the expected `coco/` layout described at https://detectron2.readthedocs.io/en/latest/tutorials/builtin_datasets.html

```python
# import some common libraries
import os
import logging
logger = logging.getLogger("detectron2")

# import some common detectron2 utilities
from detectron2.data.datasets import register_coco_instances
from detectron2.utils import comm
from detectron2.utils.logger import setup_logger
from detectron2.config import LazyConfig, instantiate
from detectron2.engine import DefaultTrainer, AMPTrainer, default_writers, hooks, default_setup
from detectron2.engine.defaults import create_ddp_model
from detectron2.evaluation import inference_on_dataset, print_csv_format
from detectron2.engine import launch, default_argument_parser
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.model_zoo import get_config


def main(args):
    logger = setup_logger()

    # Handle COCO datasets
    register_coco_instances("coco2017_train", {},
                            "coco2017/annotations/instances_train2017.json",
                            "coco2017/train2017")
    register_coco_instances("coco2017_val", {},
                            "coco2017/annotations/instances_val2017.json",
                            "coco2017/val2017")

    cfg = get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py", trained=True)

    cfg.dataloader.train.dataset.names = ("coco2017_train",)
    cfg.dataloader.test.dataset.names = ("coco2017_val",)
    cfg.train.output_dir = 'Mask_Runs/plain/lazy_test'
    cfg.dataloader.evaluator.output_dir = 'Mask_Runs/plain/lazy_test'
    default_setup(cfg, args)
    os.makedirs(cfg.train.output_dir, exist_ok=True)

    model = instantiate(cfg.model)
    model.to(cfg.train.device)
    model = create_ddp_model(model)
    DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
    print(do_test(cfg, model))


def do_test(cfg, model):
    if "evaluator" in cfg.dataloader:
        ret = inference_on_dataset(
            model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)
        )
        print_csv_format(ret)
        return ret


if __name__ == '__main__':
    args = default_argument_parser().parse_args()
    # When running multi-gpu training, must be called through launch.py
    launch(main, num_gpus_per_machine=1, num_machines=1,
           dist_url=args.dist_url,
           machine_rank=0,
           args=(args,))
```
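(Aside, as a hedged diagnostic sketch rather than part of the report above: when a COCO-format dataset is re-registered under a new name, detectron2's COCO loader rebuilds the category-id mapping by sorting the original category ids from the annotation json and mapping them to contiguous indices `0..N-1`. If that mapping differs from the one the pre-trained weights assume (e.g. because the json's category list differs), predicted class indices no longer line up with the evaluator's category ids, which is one common cause of near-zero AP. A minimal standalone sketch of that mapping — `contiguous_id_map` is a hypothetical helper, not a detectron2 API:)

```python
def contiguous_id_map(category_ids):
    """Map the sorted original category ids to contiguous indices 0..N-1,
    mirroring how detectron2's COCO loader builds its id map (assumption)."""
    return {cat_id: i for i, cat_id in enumerate(sorted(category_ids))}

# COCO's "thing" category ids are non-contiguous (1..90 with gaps);
# e.g. id 12 is unused, so id 13 ("stop sign") lands at index 11:
coco_ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13]
mapping = contiguous_id_map(coco_ids)
print(mapping[13])  # → 11
```

To check this on a real registration, one could compare `MetadataCatalog.get("coco2017_val").thing_dataset_id_to_contiguous_id` against the built-in `coco_2017_val` metadata after both datasets have been loaded once.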
2. What exact command you run:
python test_lazy.py
3. __Full logs__ or other relevant observations:
[05/04 10:07:24 fvcore.common.checkpoint]: [Checkpointer] Loading from https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/137260431/model_final_a54504.pkl ...
[05/04 10:07:25 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
WARNING [05/04 10:07:25 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model: proposal_generator.anchor_generator.cell_anchors.{0, 1, 2, 3, 4}
[05/04 10:07:25 d2.data.datasets.coco]: Loaded 5000 images in COCO format from coco2017/annotations/instances_val2017.json
[05/04 10:07:26 d2.data.build]: Distribution of instances among all 80 categories:
category     #instances   category     #instances   category     #instances
person 10777 bicycle 314 car 1918
motorcycle 367 airplane 143 bus 283
train 190 truck 414 boat 424
traffic light 634 fire hydrant 101 stop sign 75
parking meter 60 bench 411 bird 427
cat 202 dog 218 horse 272
sheep 354 cow 372 elephant 252
bear 71 zebra 266 giraffe 232
backpack 371 umbrella 407 handbag 540
tie 252 suitcase 299 frisbee 115
skis 241 snowboard 69 sports ball 260
kite 327 baseball bat 145 baseball gl.. 148
skateboard 179 surfboard 267 tennis racket 225
bottle 1013 wine glass 341 cup 895
fork 215 knife 325 spoon 253
bowl 623 banana 370 apple 236
sandwich 177 orange 285 broccoli 312
carrot 365 hot dog 125 pizza 284
donut 328 cake 310 chair 1771
couch 261 potted plant 342 bed 163
dining table 695 toilet 179 tv 288
laptop 231 mouse 106 remote 283
keyboard 153 cell phone 262 microwave 55
oven 143 toaster 9 sink 225
refrigerator 126 book 1129 clock 267
vase 274 scissors 36 teddy bear 190
hair drier 11 toothbrush 57
total 36335
[05/04 10:07:26 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333)]
[05/04 10:07:26 d2.data.common]: Serializing 5000 elements to byte tensors and concatenating them all ...
[05/04 10:07:26 d2.data.common]: Serialized dataset takes 19.07 MiB
/home/justin.butler1/software/miniconda3/envs/detectron-env/lib/python3.9/site-packages/torch/utils/data/dataloader.py:487: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
[05/04 10:07:27 d2.evaluation.evaluator]: Start inference on 5000 batches
/home/justin.butler1/Scripts/detectron2/detectron2/structures/image_list.py:99: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  max_size = (max_size + (stride - 1)) // stride * stride
/home/justin.butler1/software/miniconda3/envs/detectron-env/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1646756402876/work/aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[05/04 10:07:29 d2.evaluation.evaluator]: Inference done 11/5000. Dataloading: 0.0007 s/iter. Inference: 0.0439 s/iter. Eval: 0.0230 s/iter. Total: 0.0675 s/iter. ETA=0:05:36
[... repeated "Inference done N/5000" progress lines omitted ...]
[05/04 10:13:17 d2.evaluation.evaluator]: Inference done 4961/5000. Dataloading: 0.0011 s/iter. Inference: 0.0428 s/iter. Eval: 0.0263 s/iter. Total: 0.0703 s/iter. ETA=0:00:02
[05/04 10:13:20 d2.evaluation.evaluator]: Total inference time: 0:05:51.186736 (0.070308 s / iter per device, on 1 devices)
[05/04 10:13:20 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:34 (0.042847 s / iter per device, on 1 devices)
[05/04 10:13:21 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
Mask_Runs/plain/lazy_test
[05/04 10:13:21 d2.evaluation.coco_evaluation]: Saving results to Mask_Runs/plain/lazy_test/coco_instances_results.json
[05/04 10:13:23 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.13s)
creating index...
index created!
[05/04 10:13:23 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[05/04 10:13:31 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 8.24 seconds.
[05/04 10:13:31 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[05/04 10:13:32 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.78 seconds.
Average Precision (AP) @[ IoU=0.50:0.95 area= all maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 area= all maxDets=100 ] = 0.001 Average Precision (AP) @[ IoU=0.75 area= all maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 area= small maxDets=100 ] = 0.001 Average Precision (AP) @[ IoU=0.50:0.95 area=medium maxDets=100 ] = 0.001 Average Precision (AP) @[ IoU=0.50:0.95 area= large maxDets=100 ] = 0.001 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets= 1 ] = 0.008 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets= 10 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets=100 ] = 0.012 Average Recall (AR) @[ IoU=0.50:0.95 area= small maxDets=100 ] = 0.012 Average Recall (AR) @[ IoU=0.50:0.95 area=medium maxDets=100 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area= large maxDets=100 ] = 0.015 [05/04 10:13:32 d2.evaluation.coco_evaluation]: Evaluation results for bbox: AP AP50 AP75 APs APm APl
0.041 0.077 0.038 0.070 0.054 0.055
[05/04 10:13:32 d2.evaluation.coco_evaluation]: Evaluation results for bbox: AP AP50 AP75 APs APm APl
0.041 0.077 0.038 0.070 0.054 0.055
Loading and preparing results... DONE (t=1.55s) creating index... index created! [05/04 10:13:37 d2.evaluation.fast_eval_api]: Evaluate annotation type segm [05/04 10:13:37 d2.evaluation.fast_eval_api]: Evaluate annotation type segm [05/04 10:13:48 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 10.61 seconds. [05/04 10:13:48 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 10.61 seconds. [05/04 10:13:48 d2.evaluation.fast_eval_api]: Accumulating evaluation results... [05/04 10:13:48 d2.evaluation.fast_eval_api]: Accumulating evaluation results... [05/04 10:13:49 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.79 seconds. [05/04 10:13:49 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.79 seconds. Average Precision (AP) @[ IoU=0.50:0.95 area= all maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 area= all maxDets=100 ] = 0.001 Average Precision (AP) @[ IoU=0.75 area= all maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 area= small maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 area=medium maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 area= large maxDets=100 ] = 0.001 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets= 1 ] = 0.008 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets= 10 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area= all maxDets=100 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area= small maxDets=100 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area=medium maxDets=100 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 area= large maxDets=100 ] = 0.014 [05/04 10:13:49 d2.evaluation.coco_evaluation]: Evaluation results for segm: AP AP50 AP75 APs APm APl
0.037 0.072 0.035 0.042 0.041 0.071
[05/04 10:13:49 d2.evaluation.coco_evaluation]: Evaluation results for segm: AP AP50 AP75 APs APm APl
0.037 0.072 0.035 0.042 0.041 0.071

[05/04 10:13:50 d2.evaluation.testing]: copypaste: Task: bbox [05/04 10:13:50 d2.evaluation.testing]: copypaste: Task: bbox [05/04 10:13:50 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [05/04 10:13:50 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [05/04 10:13:50 d2.evaluation.testing]: copypaste: 0.0412,0.0772,0.0380,0.0696,0.0545,0.0550 [05/04 10:13:50 d2.evaluation.testing]: copypaste: 0.0412,0.0772,0.0380,0.0696,0.0545,0.0550 [05/04 10:13:50 d2.evaluation.testing]: copypaste: Task: segm [05/04 10:13:50 d2.evaluation.testing]: copypaste: Task: segm [05/04 10:13:50 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [05/04 10:13:50 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl [05/04 10:13:50 d2.evaluation.testing]: copypaste: 0.0374,0.0720,0.0347,0.0418,0.0415,0.0708 [05/04 10:13:50 d2.evaluation.testing]: copypaste: 0.0374,0.0720,0.0347,0.0418,0.0415,0.0708 OrderedDict([('bbox', {'AP': 0.041204753759922115, 'AP50': 0.0771635532890811, 'AP75': 0.0379770943305908, 'APs': 0.06958980737201922, 'APm': 0.0544606967609001, 'APl': 0.05498009465260034}), ('segm', {'AP': 0.037437911241689624, 'AP50': 0.07199709558586864, 'AP75': 0.03473483454833331, 'APs': 0.041843848702615545, 'APm': 0.041454509260223496, 'APl': 0.07075589063798658})])>



## Expected behavior:

When evaluating the pre-trained model with a LazyConfig, using code adapted from tools/lazyconfig_train_net.py, the results are significantly worse than the mAP listed in the Model Zoo.

## Environment:
----------------------  -----------------------------------------------------------------------------------------------------------
sys.platform            linux
Python                  3.9.12 (main, Apr  5 2022, 06:56:58) [GCC 7.5.0]
numpy                   1.21.5
detectron2              0.4.1 @/home/justin.butler1/Scripts/detectron2/detectron2
Compiler                GCC 10.2
CUDA compiler           CUDA 11.3
detectron2 arch flags   7.0
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 1.11.0 @/home/justin.butler1/software/miniconda3/envs/detectron-env/lib/python3.9/site-packages/torch
PyTorch debug build     False
GPU available           True
GPU 0                   Tesla V100-PCIE-16GB (arch=7.0)
CUDA_HOME               /global/software/cuda/cuda-11.3
Pillow                  9.0.1
torchvision             0.12.0 @/home/justin.butler1/software/miniconda3/envs/detectron-env/lib/python3.9/site-packages/torchvision
torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore                  0.1.5.post20220414
iopath                  0.1.8
cv2                     4.5.2
----------------------  -----------------------------------------------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 
jbutle55 commented 2 years ago

After changing from my registered COCO datasets to the built-in datasets, the results matched the expected performance.
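For anyone hitting the same issue: the built-in `coco_2017_*` datasets do not have to live under `./datasets` in the working directory. detectron2 resolves built-in dataset names under the `DETECTRON2_DATASETS` environment variable, so a custom directory root can be used without re-registering anything (the path below is a placeholder):

```shell
# Point detectron2's built-in dataset registry at a custom root.
# The directory must still contain the expected coco/ layout, e.g.
#   $DETECTRON2_DATASETS/coco/annotations/instances_val2017.json
#   $DETECTRON2_DATASETS/coco/val2017
export DETECTRON2_DATASETS=/path/to/datasets
```

If unset, detectron2 falls back to `./datasets` relative to the current working directory.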

2605759123 commented 2 years ago

I may have encountered the same problem as you. Do you know the reason? @jbutle55
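Not confirmed in this thread, but one common cause of near-zero AP with a manually registered COCO dataset is a category-ID mismatch: COCO's 80 thing classes use sparse IDs scattered across 1–90, while the pre-trained heads expect the contiguous 0–79 indices that the built-in metadata supplies via `thing_dataset_id_to_contiguous_id`. A toy, pure-Python sketch of that remapping (the function name is illustrative, not detectron2 API):

```python
def build_contiguous_id_map(category_ids):
    """Map sorted, possibly sparse category IDs to contiguous 0..N-1 indices."""
    return {cat_id: idx for idx, cat_id in enumerate(sorted(category_ids))}

# Toy sparse ID set; real COCO has 80 IDs scattered across 1..90.
sparse_ids = [1, 2, 3, 5, 9, 90]
id_map = build_contiguous_id_map(sparse_ids)
print(id_map)  # {1: 0, 2: 1, 3: 2, 5: 3, 9: 4, 90: 5}

# If the ground truth keeps raw ID 90 while the model predicts contiguous
# index 5, every prediction/annotation match fails and AP collapses to ~0.
```

If a re-registered dataset skips this mapping (or applies a different one than the model was trained with), evaluation compares incompatible class labels, which would produce exactly the ~0.04 AP seen above.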