Closed: leesangjoon1 closed this issue 2 years ago.
You can modify img_scale in the Resize pipeline to change the input size.
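For example, a minimal sketch of that change in a standard mmdet training pipeline; the (1000, 600) target size below is only a placeholder, not a recommendation:

```python
# Minimal sketch: only the Resize step needs to change. (1000, 600) is a
# placeholder target size, not a recommendation.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    # img_scale controls the input size; with keep_ratio=True the image is
    # rescaled so the longer edge fits within 1000 and the shorter edge within 600.
    dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
```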
Yeah, I did resize the input image, but I can train at that size in detectron2 without running out of memory, while in mmdet it runs out of memory, so I don't know what the problem is.
Could you provide the pipeline you used?
This is the coco_detection.py file. In that file the input size is (1333, 800), but in detectron2 I trained with an input size of (2500, 2500), so I don't know what I should change. I also heard that samples_per_gpu means the batch size, but I tried a batch size of 32 in detectron2 and it also ran out of memory. Please help me.
```python
dataset_type = 'CocoDataset'
data_root = '/home/sangjoon/Yet-Another-EfficientDet-Pytorch/datasets/coco_white/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')
```
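For reference, a rough sketch of how the batch-size settings of the two frameworks relate; this is general framework behaviour rather than something taken from this thread, and num_gpus is a placeholder:

```python
# mmdetection: samples_per_gpu is the batch size *per GPU*, so with N GPUs the
# effective (total) batch size is samples_per_gpu * N.
samples_per_gpu = 2
num_gpus = 1  # placeholder
mmdet_total_batch = samples_per_gpu * num_gpus        # 2 images per iteration here

# detectron2: SOLVER.IMS_PER_BATCH is the *total* batch size across all GPUs,
# so the per-GPU batch size is IMS_PER_BATCH / N.
ims_per_batch = 32
detectron2_per_gpu_batch = ims_per_batch // num_gpus  # 32 images per GPU here

print(mmdet_total_batch, detectron2_per_gpu_batch)
```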
According to the previous comparison, the training memory usage of mmdetection and detectron2 is basically identical. Please check whether the running conditions are the same.
For out-of-memory errors, you can reduce samples_per_gpu and further reduce the input image size.
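A minimal sketch of that advice applied to the config above; samples_per_gpu=1 and img_scale=(1024, 640) are example values, not tuned recommendations:

```python
# Lower the per-GPU batch size (the config above uses samples_per_gpu=2).
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=2,
)
# And/or lower the resize target in the training pipeline (was (1333, 800)).
train_pipeline = [
    # ... other steps unchanged ...
    dict(type='Resize', img_scale=(1024, 640), keep_ratio=True),
    # ...
]
```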
This issue will be closed due to inactivity. Feel free to reopen it if you have any questions.
In detectron2, the config file has settings that control the input size before the image is resized. How do I do that in mmdet?
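For comparison, a sketch of roughly equivalent settings in the two frameworks, assuming detectron2 is installed; the detectron2 keys are its standard INPUT options, and mmdet_resize is just an illustrative name for the Resize step already shown in the config above:

```python
# detectron2: the input size is controlled by INPUT options in the config.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.INPUT.MIN_SIZE_TRAIN = (800,)   # target(s) for the shorter edge during training
cfg.INPUT.MAX_SIZE_TRAIN = 1333     # cap on the longer edge during training
cfg.INPUT.MIN_SIZE_TEST = 800
cfg.INPUT.MAX_SIZE_TEST = 1333

# mmdetection: the equivalent control lives in the dataset pipeline. With
# keep_ratio=True, img_scale=(1333, 800) caps the longer edge at 1333 and the
# shorter edge at 800, which roughly corresponds to the detectron2 settings above.
mmdet_resize = dict(type='Resize', img_scale=(1333, 800), keep_ratio=True)
```

So if the detectron2 run used larger INPUT sizes (e.g. the 2500 mentioned earlier in this thread), the corresponding knob in mmdet is the img_scale of that Resize step.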
Due to the memory cost, I am now having trouble training the deep learning model.
Please help me :)