facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Unable to change input image size for training and testing #1332

Closed · ghost closed this issue 4 years ago

ghost commented 4 years ago

I declared my config parameters for input image size as follows:

cfg.INPUT.MIN_SIZE_TRAIN: (800, 832, 864, 896, 928, 960, 992, 1024)
cfg.INPUT.MAX_SIZE_TRAIN: 2048
cfg.INPUT.MIN_SIZE_TEST: 1024
cfg.INPUT.MAX_SIZE_TEST: 2048

However, the default input size parameters are used instead. To verify, I inserted a print statement (look for "Input size check" in the logs below) just before this line: https://github.com/facebookresearch/detectron2/blob/4197baa35aa8e15326877eb44d9e6a7c452e26a7/detectron2/data/detection_utils.py#L460
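An equivalent check that avoids editing library code is to build the training transforms from the config and print them. A minimal sketch, assuming detectron2 0.1.x, where build_transform_gen lives in detectron2.data.detection_utils:

from detectron2.config import get_cfg
from detectron2.data.detection_utils import build_transform_gen

cfg = get_cfg()
# ... apply the INPUT overrides from above ...

# Prints the ResizeShortestEdge / RandomFlip transforms that the training
# data loader will actually build from this cfg.
print(build_transform_gen(cfg, is_train=True))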

Instructions To Reproduce the Issue:

  1. what changes you made (git diff) or what code you wrote
    
#!/usr/bin/env python
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Detectron2 training script with a plain training loop.

This script reads a given config file and runs the training or evaluation.
It is an entry point that is able to train standard models in detectron2.

In order to let one script support training of many models,
this script contains logic that are specific to these built-in models and
therefore may not be suitable for your own project.
For example, your research project perhaps only needs a single "evaluator".

Therefore, we recommend you to use detectron2 as a library and take
this file as an example of how to use the library.
You may want to write your own script with your datasets and other customizations.

Compared to "train_net.py", this script supports fewer default features.
It also includes fewer abstraction, therefore is easier to add custom logic.
"""

# You may need to restart your runtime prior to this, to let your installation take effect

# Some basic setup

import detectron2

# import some common libraries

import numpy as np
import cv2
import random
import json
from detectron2.structures import BoxMode

from google.colab.patches import cv2_imshow

import logging
import os
from collections import OrderedDict
import torch
from torch.nn.parallel import DistributedDataParallel

from detectron2 import model_zoo
from detectron2.data.datasets import register_coco_instances
from detectron2.data import MetadataCatalog, DatasetCatalog
import detectron2.utils.comm as comm
from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer
from detectron2.config import get_cfg
from detectron2.data import (
    MetadataCatalog,
    build_detection_test_loader,
    build_detection_train_loader,
)
from detectron2.engine import default_argument_parser, default_setup, launch
from detectron2.evaluation import (
    CityscapesEvaluator,
    COCOEvaluator,
    COCOPanopticEvaluator,
    DatasetEvaluators,
    LVISEvaluator,
    PascalVOCDetectionEvaluator,
    SemSegEvaluator,
    inference_on_dataset,
    print_csv_format,
)
from detectron2.modeling import build_model
from detectron2.solver import build_lr_scheduler, build_optimizer
from detectron2.utils.events import (
    CommonMetricPrinter,
    EventStorage,
    JSONWriter,
    TensorboardXWriter,
)

logger = logging.getLogger("detectron2")

def get_evaluator(cfg, dataset_name, output_folder=None):
    """
    Create evaluator(s) for a given dataset.
    This uses the special metadata "evaluator_type" associated with each builtin dataset.
    For your own dataset, you can simply create an evaluator manually in your
    script and do not have to worry about the hacky if-else logic here.
    """
    if output_folder is None:
        output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
    evaluator_list = []
    evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
    if evaluator_type in ["sem_seg", "coco_panoptic_seg"]:
        evaluator_list.append(
            SemSegEvaluator(
                dataset_name,
                distributed=True,
                num_classes=cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
                ignore_label=cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
                output_dir=output_folder,
            )
        )
    if evaluator_type in ["coco", "coco_panoptic_seg"]:
        evaluator_list.append(COCOEvaluator(dataset_name, cfg, True, output_folder))
    if evaluator_type == "coco_panoptic_seg":
        evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder))
    if evaluator_type == "cityscapes":
        assert (
            torch.cuda.device_count() >= comm.get_rank()
        ), "CityscapesEvaluator currently do not work with multiple machines."
        return CityscapesEvaluator(dataset_name)
    if evaluator_type == "pascal_voc":
        return PascalVOCDetectionEvaluator(dataset_name)
    if evaluator_type == "lvis":
        return LVISEvaluator(dataset_name, cfg, True, output_folder)
    if len(evaluator_list) == 0:
        raise NotImplementedError(
            "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type)
        )
    if len(evaluator_list) == 1:
        return evaluator_list[0]
    return DatasetEvaluators(evaluator_list)

def do_test(cfg, model):
    results = OrderedDict()
    for dataset_name in cfg.DATASETS.TEST:
        data_loader = build_detection_test_loader(cfg, dataset_name)
        evaluator = get_evaluator(
            cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name)
        )
        results_i = inference_on_dataset(model, data_loader, evaluator)
        results[dataset_name] = results_i
        if comm.is_main_process():
            logger.info("Evaluation results for {} in csv format:".format(dataset_name))
            print_csv_format(results_i)
    if len(results) == 1:
        results = list(results.values())[0]
    return results

def do_train(cfg, model, resume=False):
    model.train()
    optimizer = build_optimizer(cfg, model)
    scheduler = build_lr_scheduler(cfg, optimizer)

    checkpointer = DetectionCheckpointer(
        model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler
    )
    start_iter = (
        checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1
    )
    max_iter = cfg.SOLVER.MAX_ITER

    periodic_checkpointer = PeriodicCheckpointer(
        checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter
    )

    writers = (
        [
            CommonMetricPrinter(max_iter),
            JSONWriter(os.path.join(cfg.OUTPUT_DIR, "metrics.json")),
            TensorboardXWriter(cfg.OUTPUT_DIR),
        ]
        if comm.is_main_process()
        else []
    )

    # compared to "train_net.py", we do not support accurate timing and
    # precise BN here, because they are not trivial to implement
    data_loader = build_detection_train_loader(cfg)
    logger.info("Starting training from iteration {}".format(start_iter))
    with EventStorage(start_iter) as storage:
        for data, iteration in zip(data_loader, range(start_iter, max_iter)):
            iteration = iteration + 1
            storage.step()

            loss_dict = model(data)
            losses = sum(loss_dict.values())
            assert torch.isfinite(losses).all(), loss_dict

            loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()}
            losses_reduced = sum(loss for loss in loss_dict_reduced.values())
            if comm.is_main_process():
                storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced)

            optimizer.zero_grad()
            losses.backward()
            optimizer.step()
            storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False)
            scheduler.step()

            if (
                cfg.TEST.EVAL_PERIOD > 0
                and iteration % cfg.TEST.EVAL_PERIOD == 0
                #and iteration != max_iter
            ):
                do_test(cfg, model)
                # Compared to "train_net.py", the test results are not dumped to EventStorage
                comm.synchronize()

            if iteration - start_iter > 5 and (iteration % 20 == 0 or iteration == max_iter):
                for writer in writers:
                    writer.write()
            periodic_checkpointer.step(iteration)

def get_balloon_dicts(img_dir):
    json_file = os.path.join(img_dir, "via_region_data.json")
    with open(json_file) as f:
        imgs_anns = json.load(f)

    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        record = {}

        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]

        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width

        annos = v["regions"]
        objs = []
        for _, anno in annos.items():
            assert not anno["region_attributes"]
            anno = anno["shape_attributes"]
            px = anno["all_points_x"]
            py = anno["all_points_y"]
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            poly = [p for x in poly for p in x]

            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": 0,
                "iscrowd": 0,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

def setup(args):
    """
    Create configs and perform basic setups.
    """
    for d in ["train", "val"]:
        DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("balloon/" + d))
        MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.DATASETS.TRAIN = ("balloon_train",)
    cfg.MODEL.MASK_ON = False
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7  # set the testing threshold for this model
    cfg.DATASETS.TEST = ("balloon_val",)
    MetadataCatalog.get("balloon_val").evaluator_type = "coco"
    cfg.DATALOADER.NUM_WORKERS = 1
    cfg.MODEL.WEIGHTS = "https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl"  # Let training initialize from model zoo
    cfg.SOLVER.IMS_PER_BATCH = 4
    cfg.SOLVER.BASE_LR = 0.00025  # pick a good LR
    cfg.SOLVER.MAX_ITER = 300
    cfg.TEST.EVAL_PERIOD = 300

    cfg.INPUT.MIN_SIZE_TRAIN: (800, 832, 864, 896, 928, 960, 992, 1024)
    cfg.INPUT.MAX_SIZE_TRAIN: 2048
    cfg.INPUT.MIN_SIZE_TEST: 1024
    cfg.INPUT.MAX_SIZE_TEST: 2048

    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128  # faster, and good enough for this toy dataset (default: 512)
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only has one class (balloon)

    os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

    cfg.merge_from_list(args.opts)
    cfg.freeze()
    default_setup(
        cfg, args
    )  # if you don't like any of the default setup, write your own setup code
    return cfg

def main(args):
    cfg = setup(args)

    model = build_model(cfg)
    logger.info("Model:\n{}".format(model))
    if args.eval_only:
        DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
            cfg.MODEL.WEIGHTS, resume=args.resume
        )
        return do_test(cfg, model)

    distributed = comm.get_world_size() > 1
    if distributed:
        model = DistributedDataParallel(
            model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
        )

    do_train(cfg, model)
    print("Done")

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    print("Command Line Args:", args)
    launch(
        main,
        args.num_gpus,
        num_machines=args.num_machines,
        machine_rank=args.machine_rank,
        dist_url=args.dist_url,
        args=(args,),
    )

2. what exact command you run: python /content/detectron2_repo/tools/balloon_train_net.py
3. what you observed (including full logs):

Command Line Args: Namespace(config_file='', dist_url='tcp://127.0.0.1:49152', eval_only=False, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=False)
[04/29 11:08:45 detectron2]: Rank of current process: 0. World size: 1
[04/29 11:08:45 detectron2]: Environment info:


sys.platform              linux
Python                    3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
numpy                     1.18.3
detectron2                0.1.1 @/content/detectron2_repo/detectron2
detectron2 compiler       GCC 7.5
detectron2 CUDA compiler  10.1
detectron2 arch flags     sm_60
DETECTRON2_ENV_MODULE
PyTorch                   1.4.0+cu100 @/usr/local/lib/python3.6/dist-packages/torch
PyTorch debug build       False
CUDA available            True
GPU 0                     Tesla P100-PCIE-16GB
CUDA_HOME                 /usr/local/cuda
NVCC                      Cuda compilation tools, release 10.1, V10.1.243
Pillow                    7.0.0
torchvision               0.5.0+cu100 @/usr/local/lib/python3.6/dist-packages/torchvision
torchvision arch flags    sm_35, sm_50, sm_60, sm_70, sm_75
fvcore                    0.1.dev200424
cv2                       4.1.2


PyTorch built with:

[04/29 11:08:45 detectron2]: Command line arguments: Namespace(config_file='', dist_url='tcp://127.0.0.1:49152', eval_only=False, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=False)
[04/29 11:08:45 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
  ASPECT_RATIO_GROUPING: True
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 1
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: ()
  PROPOSAL_FILES_TRAIN: ()
  TEST: ('balloon_val',)
  TRAIN: ('balloon_train',)
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    ENABLED: False
    SIZE: [0.9, 0.9]
    TYPE: relative_range
  FORMAT: BGR
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1333
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
  MIN_SIZE_TRAIN_SAMPLING: choice
MODEL:
  ANCHOR_GENERATOR:
    ANGLES: [[-90, 0, 90]]
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES: [[32], [64], [128], [256], [512]]
  BACKBONE:
    FREEZE_AT: 2
    NAME: build_resnet_fpn_backbone
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: ['res2', 'res3', 'res4', 'res5']
    NORM:
    OUT_CHANNELS: 256
  KEYPOINT_ON: False
  LOAD_PROPOSALS: False
  MASK_ON: False
  META_ARCHITECTURE: GeneralizedRCNN
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: True
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN: [103.53, 116.28, 123.675]
  PIXEL_STD: [1.0, 1.0, 1.0]
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  RESNETS:
    DEFORM_MODULATED: False
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE: [False, False, False, False]
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES: ['res2', 'res3', 'res4', 'res5']
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: True
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.4, 0.5]
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
    IOUS: (0.5, 0.6, 0.7)
  ROI_BOX_HEAD:
    BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
    CLS_AGNOSTIC_BBOX_REG: True
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: FastRCNNConvFCHead
    NORM:
    NUM_CONV: 0
    NUM_FC: 2
    POOLER_RESOLUTION: 7
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: False
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 128
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    NAME: CascadeROIHeads
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 1
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: True
    SCORE_THRESH_TEST: 0.7
  ROI_KEYPOINT_HEAD:
    CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: False
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM:
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    BOUNDARY_THRESH: -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5', 'p6']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.3, 0.7]
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 2000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  WEIGHTS: https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl
OUTPUT_DIR: ./output
SEED: -1
SOLVER:
  BASE_LR: 0.00025
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 5000
  CLIP_GRADIENTS:
    CLIP_TYPE: value
    CLIP_VALUE: 1.0
    ENABLED: False
    NORM_TYPE: 2.0
  GAMMA: 0.1
  IMS_PER_BATCH: 4
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 300
  MOMENTUM: 0.9
  NESTEROV: False
  STEPS: (210000, 250000)
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 1000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: False
    FLIP: True
    MAX_SIZE: 4000
    MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 300
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: False
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0
[04/29 11:08:45 detectron2]: Full config saved to ./output/config.yaml
[04/29 11:08:45 d2.utils.env]: Using a generated random seed 45357790

GeneralizedRCNN( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) (rpn_head): StandardRPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): CascadeROIHeads( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (box_head): ModuleList( (0): FastRCNNConvFCHead( (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) (1): FastRCNNConvFCHead( (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) (2): FastRCNNConvFCHead( (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1024, bias=True) ) ) (box_predictor): ModuleList( (0): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=2, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (1): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=2, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) (2): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=2, bias=True) (bbox_pred): Linear(in_features=1024, out_features=4, bias=True) ) ) ) ) [04/29 11:08:50 fvcore.common.checkpoint]: Loading checkpoint from https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl [04/29 11:08:50 fvcore.common.file_io]: URL https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl cached in /root/.torch/fvcore_cache/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl [04/29 11:08:50 fvcore.common.checkpoint]: Reading a 
file from 'Detectron2 Model Zoo'
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.0.cls_score.weight' has shape (81, 1024) in the checkpoint but (2, 1024) in the model! Skipped.
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.0.cls_score.bias' has shape (81,) in the checkpoint but (2,) in the model! Skipped.
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.1.cls_score.weight' has shape (81, 1024) in the checkpoint but (2, 1024) in the model! Skipped.
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.1.cls_score.bias' has shape (81,) in the checkpoint but (2,) in the model! Skipped.
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.2.cls_score.weight' has shape (81, 1024) in the checkpoint but (2, 1024) in the model! Skipped.
WARNING [04/29 11:08:50 fvcore.common.checkpoint]: 'roi_heads.box_predictor.2.cls_score.bias' has shape (81,) in the checkpoint but (2,) in the model! Skipped.
[04/29 11:08:50 fvcore.common.checkpoint]: Some model parameters or buffers are not in the checkpoint:
roi_heads.box_predictor.1.cls_score.{weight, bias}
roi_heads.box_predictor.0.cls_score.{bias, weight}
roi_heads.box_predictor.2.cls_score.{bias, weight}
[04/29 11:08:50 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
roi_heads.mask_head.mask_fcn1.{weight, bias}
roi_heads.mask_head.mask_fcn2.{weight, bias}
roi_heads.mask_head.mask_fcn3.{weight, bias}
roi_heads.mask_head.mask_fcn4.{weight, bias}
roi_heads.mask_head.deconv.{weight, bias}
roi_heads.mask_head.predictor.{weight, bias}
2020-04-29 11:08:50.872195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
[04/29 11:08:54 d2.data.build]: Removed 0 images with no usable annotations. 61 images left.
[04/29 11:08:54 d2.data.build]: Distribution of instances among all 1 categories:
category   #instances
balloon    255

[04/29 11:08:54 d2.data.common]: Serializing 61 elements to byte tensors and concatenating them all ...
[04/29 11:08:54 d2.data.common]: Serialized dataset takes 0.17 MiB

Input size check ((640, 672, 704, 736, 768, 800), 1333, 'choice')

[04/29 11:08:54 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[04/29 11:08:54 d2.data.build]: Using training sampler TrainingSampler
[04/29 11:08:54 detectron2]: Starting training from iteration 0
[04/29 11:09:08 d2.utils.events]: eta: N/A iter: 20 total_loss: 3.165 loss_cls_stage0: 0.826 loss_box_reg_stage0: 0.186 loss_cls_stage1: 0.656 loss_box_reg_stage1: 0.324 loss_cls_stage2: 0.728 loss_box_reg_stage2: 0.376 loss_rpn_cls: 0.042 loss_rpn_loc: 0.014 lr: 0.000005 max_mem: 5104M
[04/29 11:09:21 d2.utils.events]: eta: 0:02:51 iter: 40 total_loss: 3.097 loss_cls_stage0: 0.782 loss_box_reg_stage0: 0.179 loss_cls_stage1: 0.634 loss_box_reg_stage1: 0.310 loss_cls_stage2: 0.712 loss_box_reg_stage2: 0.391 loss_rpn_cls: 0.031 loss_rpn_loc: 0.011 lr: 0.000010 max_mem: 5104M
[04/29 11:09:35 d2.utils.events]: eta: 0:02:40 iter: 60 total_loss: 2.858 loss_cls_stage0: 0.724 loss_box_reg_stage0: 0.180 loss_cls_stage1: 0.607 loss_box_reg_stage1: 0.282 loss_cls_stage2: 0.659 loss_box_reg_stage2: 0.326 loss_rpn_cls: 0.034 loss_rpn_loc: 0.014 lr: 0.000015 max_mem: 5172M
[04/29 11:09:48 d2.utils.events]: eta: 0:02:27 iter: 80 total_loss: 2.521 loss_cls_stage0: 0.638 loss_box_reg_stage0: 0.172 loss_cls_stage1: 0.564 loss_box_reg_stage1: 0.235 loss_cls_stage2: 0.597 loss_box_reg_stage2: 0.326 loss_rpn_cls: 0.027 loss_rpn_loc: 0.010 lr: 0.000020 max_mem: 5172M
[04/29 11:10:01 d2.utils.events]: eta: 0:02:13 iter: 100 total_loss: 2.438 loss_cls_stage0: 0.555 loss_box_reg_stage0: 0.158 loss_cls_stage1: 0.511 loss_box_reg_stage1: 0.287 loss_cls_stage2: 0.529 loss_box_reg_stage2: 0.367 loss_rpn_cls: 0.032 loss_rpn_loc: 0.013 lr: 0.000025 max_mem: 5277M
[04/29 11:10:14 d2.utils.events]: eta: 0:01:57 iter: 120 total_loss: 2.312 loss_cls_stage0: 0.489 loss_box_reg_stage0: 0.173 loss_cls_stage1: 0.473 loss_box_reg_stage1: 0.290 loss_cls_stage2: 0.472 loss_box_reg_stage2: 0.395 loss_rpn_cls: 0.027 loss_rpn_loc: 0.012 lr: 0.000030 max_mem: 5277M
[04/29 11:10:28 d2.utils.events]: eta: 0:01:47 iter: 140 total_loss: 2.048 loss_cls_stage0: 0.439 loss_box_reg_stage0: 0.136 loss_cls_stage1: 0.423 loss_box_reg_stage1: 0.248 loss_cls_stage2: 0.426 loss_box_reg_stage2: 0.364 loss_rpn_cls: 0.028 loss_rpn_loc: 0.011 lr: 0.000035 max_mem: 5277M
[04/29 11:10:41 d2.utils.events]: eta: 0:01:32 iter: 160 total_loss: 1.905 loss_cls_stage0: 0.394 loss_box_reg_stage0: 0.132 loss_cls_stage1: 0.377 loss_box_reg_stage1: 0.255 loss_cls_stage2: 0.381 loss_box_reg_stage2: 0.314 loss_rpn_cls: 0.027 loss_rpn_loc: 0.009 lr: 0.000040 max_mem: 5277M
[04/29 11:10:54 d2.utils.events]: eta: 0:01:16 iter: 180 total_loss: 1.780 loss_cls_stage0: 0.353 loss_box_reg_stage0: 0.138 loss_cls_stage1: 0.344 loss_box_reg_stage1: 0.246 loss_cls_stage2: 0.336 loss_box_reg_stage2: 0.338 loss_rpn_cls: 0.021 loss_rpn_loc: 0.008 lr: 0.000045 max_mem: 5277M
[04/29 11:11:07 d2.utils.events]: eta: 0:01:03 iter: 200 total_loss: 1.658 loss_cls_stage0: 0.315 loss_box_reg_stage0: 0.141 loss_cls_stage1: 0.305 loss_box_reg_stage1: 0.221 loss_cls_stage2: 0.310 loss_box_reg_stage2: 0.292 loss_rpn_cls: 0.025 loss_rpn_loc: 0.013 lr: 0.000050 max_mem: 5278M
[04/29 11:11:20 d2.utils.events]: eta: 0:00:54 iter: 220 total_loss: 1.640 loss_cls_stage0: 0.283 loss_box_reg_stage0: 0.139 loss_cls_stage1: 0.268 loss_box_reg_stage1: 0.267 loss_cls_stage2: 0.270 loss_box_reg_stage2: 0.398 loss_rpn_cls: 0.014 loss_rpn_loc: 0.009 lr: 0.000055 max_mem: 5278M
[04/29 11:11:33 d2.utils.events]: eta: 0:00:39 iter: 240 total_loss: 1.569 loss_cls_stage0: 0.267 loss_box_reg_stage0: 0.144 loss_cls_stage1: 0.252 loss_box_reg_stage1: 0.254 loss_cls_stage2: 0.249 loss_box_reg_stage2: 0.394 loss_rpn_cls: 0.025 loss_rpn_loc: 0.011 lr: 0.000060 max_mem: 5278M
[04/29 11:11:46 d2.utils.events]: eta: 0:00:25 iter: 260 total_loss: 1.403 loss_cls_stage0: 0.239 loss_box_reg_stage0: 0.128 loss_cls_stage1: 0.219 loss_box_reg_stage1: 0.216 loss_cls_stage2: 0.222 loss_box_reg_stage2: 0.295 loss_rpn_cls: 0.015 loss_rpn_loc: 0.010 lr: 0.000065 max_mem: 5278M
[04/29 11:12:00 d2.utils.events]: eta: 0:00:13 iter: 280 total_loss: 1.355 loss_cls_stage0: 0.227 loss_box_reg_stage0: 0.129 loss_cls_stage1: 0.194 loss_box_reg_stage1: 0.231 loss_cls_stage2: 0.193 loss_box_reg_stage2: 0.356 loss_rpn_cls: 0.021 loss_rpn_loc: 0.009 lr: 0.000070 max_mem: 5278M
[04/29 11:12:12 fvcore.common.checkpoint]: Saving checkpoint to ./output/model_final.pth
[04/29 11:12:15 d2.data.build]: Distribution of instances among all 1 categories:
category   #instances
balloon    50

[04/29 11:12:15 d2.data.common]: Serializing 13 elements to byte tensors and concatenating them all ...
[04/29 11:12:15 d2.data.common]: Serialized dataset takes 0.04 MiB

Input size check (800, 1333, 'choice')

WARNING [04/29 11:12:15 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'balloon_val'. Trying to convert it to COCO format ...
WARNING [04/29 11:12:15 d2.data.datasets.coco]: Using previously cached COCO format annotations at './output/inference/balloon_val/balloon_val_coco_format.json'. You need to clear the cache file if your dataset has been modified.
[04/29 11:12:15 d2.evaluation.evaluator]: Start inference on 13 images
[04/29 11:12:17 d2.evaluation.evaluator]: Inference done 11/13. 0.0919 s / img. ETA=0:00:00
[04/29 11:12:17 d2.evaluation.evaluator]: Total inference time: 0:00:00.791532 (0.098941 s / img per device, on 1 devices)
[04/29 11:12:17 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:00 (0.090434 s / img per device, on 1 devices)
[04/29 11:12:17 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/29 11:12:17 d2.evaluation.coco_evaluation]: Saving results to ./output/inference/balloon_val/coco_instances_results.json
[04/29 11:12:17 d2.evaluation.coco_evaluation]: Evaluating predictions ...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.02s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.628
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.683
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.683
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.362
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.842
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.230
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.634
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.634
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.359
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.853
[04/29 11:12:17 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
AP       AP50     AP75     APs     APm      APl
62.799   68.317   68.317   0.000   36.238   84.157

[04/29 11:12:17 detectron2]: Evaluation results for balloon_val in csv format:
[04/29 11:12:17 d2.evaluation.testing]: copypaste: Task: bbox
[04/29 11:12:17 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/29 11:12:17 d2.evaluation.testing]: copypaste: 62.7986,68.3168,68.3168,0.0000,36.2376,84.1575
[04/29 11:12:17 d2.utils.events]: eta: 0:00:00 iter: 300 total_loss: 1.231 loss_cls_stage0: 0.208 loss_box_reg_stage0: 0.127 loss_cls_stage1: 0.173 loss_box_reg_stage1: 0.233 loss_cls_stage2: 0.172 loss_box_reg_stage2: 0.311 loss_rpn_cls: 0.014 loss_rpn_loc: 0.009 lr: 0.000075 max_mem: 5278M
[04/29 11:12:17 fvcore.common.checkpoint]: Saving checkpoint to ./output/model_final.pth
Done


4. please also simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

Run the above balloon_train_net.py on Colab using the Balloon dataset.
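If the dataset is not already on the Colab disk, it can be fetched first. A minimal sketch, assuming the balloon_dataset.zip release used by the detectron2 colab tutorial, which unpacks to balloon/train and balloon/val in the working directory:

import urllib.request
import zipfile

# Download and unpack the Balloon dataset from the matterport/Mask_RCNN release.
url = "https://github.com/matterport/Mask_RCNN/releases/download/v2.1/balloon_dataset.zip"
urllib.request.urlretrieve(url, "balloon_dataset.zip")
with zipfile.ZipFile("balloon_dataset.zip") as zf:
    zf.extractall(".")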

## Expected behavior:

It should be possible to change the following parameters:
1) cfg.INPUT.MIN_SIZE_TRAIN: (800, 832, 864, 896, 928, 960, 992, 1024)
2) cfg.INPUT.MAX_SIZE_TRAIN: 2048
3) cfg.INPUT.MIN_SIZE_TEST: 1024
4) cfg.INPUT.MAX_SIZE_TEST: 2048
ghost commented 4 years ago

I have resolved this problem. Closing this issue.
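A likely cause is visible in the script above: the four INPUT overrides are written with ":" instead of "=". A bare "cfg.KEY: value" statement is parsed by Python as a variable annotation, not an assignment — it never stores the value, so cfg silently keeps its defaults. Rewriting the overrides as assignments is presumably what resolved it:

# Assignments ("="), not annotations (":"), so the values actually reach cfg:
cfg.INPUT.MIN_SIZE_TRAIN = (800, 832, 864, 896, 928, 960, 992, 1024)
cfg.INPUT.MAX_SIZE_TRAIN = 2048
cfg.INPUT.MIN_SIZE_TEST = 1024
cfg.INPUT.MAX_SIZE_TEST = 2048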

fkthi commented 4 years ago

How was it resolved?