Open 827346462 opened 1 year ago
Still really don't get it. 🤦♂️
You need to specify `optim_wrapper` and `train_cfg`.
So what is the bug?
I added:

```python
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1)
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
test_cfg = dict(type='TestLoop')
```
It shows:

```
TypeError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `ConcatDataset` in mmengine/dataset/dataset_wrapper.py: class `CocoDataset` in mmdet/datasets/coco.py: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'
```
Only `IterBasedTrainLoop` is supported, because there are two datasets: labeled and unlabeled.
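A minimal sketch of the loop config the reply is pointing at (the iteration counts below are illustrative, not values from this thread):

```python
# Semi-supervised configs concatenate a labeled and an unlabeled dataset,
# so training is driven by iterations rather than epochs.
train_cfg = dict(
    type='IterBasedTrainLoop',  # not 'EpochBasedTrainLoop'
    max_iters=180000,           # illustrative value
    val_interval=5000)          # illustrative value
```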
I followed this guide: https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/semi_det.html. It says to use `semi_train_cfg` and `semi_test_cfg`. Still can't get through:
```
TypeError: class `IterBasedTrainLoop` in mmengine/runner/loops.py: class `ConcatDataset` in mmengine/dataset/dataset_wrapper.py: class `CocoDataset` in mmdet/datasets/coco.py: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'
```
I've looked at it for a day, still no luck. Now it says `branch_field` is missing, but the cfg has `branch_field = ['sup', 'unsup_teacher', 'unsup_student']`.
```
Traceback (most recent call last):
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
TypeError: __init__() missing 1 required positional argument: 'branch_field'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/sevnce/mmlab/mmdetection-3.0.0/mmdet/datasets/base_det_dataset.py", line 40, in __init__
    super().__init__(*args, **kwargs)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/dataset/base_dataset.py", line 242, in __init__
    self.pipeline = Compose(pipeline)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/dataset/base_dataset.py", line 36, in __init__
    transform = TRANSFORMS.build(transform)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 145, in build_from_cfg
    f'class `{obj_cls.__name__}` in '  # type: ignore
TypeError: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/dataset/dataset_wrapper.py", line 46, in __init__
    self.datasets.append(DATASETS.build(dataset))
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 145, in build_from_cfg
    f'class `{obj_cls.__name__}` in '  # type: ignore
TypeError: class `CocoDataset` in mmdet/datasets/coco.py: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/runner/loops.py", line 219, in __init__
    super().__init__(runner, dataloader)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/runner/base_loop.py", line 27, in __init__
    dataloader, seed=runner.seed, diff_rank_seed=diff_rank_seed)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/runner/runner.py", line 1346, in build_dataloader
    dataset = DATASETS.build(dataset_cfg)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/registry/build_functions.py", line 145, in build_from_cfg
    f'class `{obj_cls.__name__}` in '  # type: ignore
TypeError: class `ConcatDataset` in mmengine/dataset/dataset_wrapper.py: class `CocoDataset` in mmdet/datasets/coco.py: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./tools/train.py", line 133, in <module>
    ...
TypeError: class `IterBasedTrainLoop` in mmengine/runner/loops.py: class `ConcatDataset` in mmengine/dataset/dataset_wrapper.py: class `CocoDataset` in mmdet/datasets/coco.py: class `MultiBranch` in mmdet/datasets/transforms/wrappers.py: __init__() missing 1 required positional argument: 'branch_field'
```
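For reference, the innermost error is Python's own missing-argument check, which the registry then wraps class by class. A minimal stand-in sketch (the class below is illustrative, not mmdet's real `MultiBranch`):

```python
class MultiBranchSketch:
    """Illustrative stand-in for a transform whose __init__ requires
    a branch_field argument in addition to the per-branch pipelines."""

    def __init__(self, branch_field, **branch_pipelines):
        self.branch_field = branch_field
        self.branch_pipelines = branch_pipelines

# Omitting branch_field reproduces the TypeError the registry chains:
try:
    MultiBranchSketch(sup=dict(type='PackDetInputs'))
except TypeError as err:
    print(err)  # message contains: missing 1 required positional argument: 'branch_field'
```

Every occurrence of the transform in a config must supply that argument, not just one of them.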
So what is the bug?
From this guide: https://mmdetection.readthedocs.io/en/v3.0.0rc0/user_guides/semi_det.html#configure-meanteacherhook

```python
sup_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),  # ###
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RandomResize', scale=scale, keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='RandAugment', aug_space=color_space, aug_num=1),
    dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
    dict(type='MultiBranch', sup=dict(type='PackDetInputs'))
]
```

The marked line uses `file_client_args`, but `file_client_args` is never defined.
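In the final mmdet 3.x configs the loader argument was renamed, so one workaround is to define `backend_args` once and use it instead of the undefined `file_client_args`. A minimal sketch, assuming the rest of the pipeline stays as in the guide:

```python
backend_args = None  # define this once near the top of the config

sup_pipeline = [
    # mmdet 3.x renamed the loader argument: 'backend_args' replaces
    # the older 'file_client_args' key from the 3.0.0rc-era docs
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
]
```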
Now the cfg is:
_base_ = [
'../_base_/models/faster-rcnn_r50_fpn.py', '../_base_/default_runtime.py',
'../_base_/datasets/semi_coco_detection.py'
]
detector = _base_.model
detector.data_preprocessor = dict(
type='DetDataPreprocessor',
mean=[103.530, 116.280, 123.675],
std=[1.0, 1.0, 1.0],
bgr_to_rgb=False,
pad_size_divisor=32)
detector.backbone = dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe'))
model = dict(
_delete_=True,
type='SoftTeacher',
detector=detector,
data_preprocessor=dict(
type='MultiBranchDataPreprocessor',
data_preprocessor=detector.data_preprocessor),
semi_train_cfg=dict(
freeze_teacher=True,
sup_weight=1.0,
unsup_weight=4.0,
pseudo_label_initial_score_thr=0.5,
rpn_pseudo_thr=0.9,
cls_pseudo_thr=0.9,
reg_pseudo_thr=0.02,
jitter_times=10,
jitter_scale=0.06,
min_pseudo_bbox_wh=(1e-2, 1e-2)),
semi_test_cfg=dict(predict_on='teacher'))
custom_hooks = [dict(type='MeanTeacherHook')]
val_cfg = dict(type='TeacherStudentValLoop')
backend_args = None
color_space = [
[dict(type='ColorTransform')],
[dict(type='AutoContrast')],
[dict(type='Equalize')],
[dict(type='Sharpness')],
[dict(type='Posterize')],
[dict(type='Solarize')],
[dict(type='Color')],
[dict(type='Contrast')],
[dict(type='Brightness')],
]
geometric = [
[dict(type='Rotate')],
[dict(type='ShearX')],
[dict(type='ShearY')],
[dict(type='TranslateX')],
[dict(type='TranslateY')],
]
train_cfg = dict(type='IterBasedTrainLoop', max_iters=20000, val_interval=10000)
test_cfg = dict(type='TestLoop')
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
scale = [(1333, 400), (1333, 1200)]
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
batch_size = 4
num_workers = 5
branch_field = ['sup', 'unsup_teacher', 'unsup_student']
sup_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='RandAugment', aug_space=color_space, aug_num=1),
dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
dict(type='MultiBranch', branch_field=branch_field,sup=dict(type='PackDetInputs'))
]
weak_pipeline = [
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor', 'flip', 'flip_direction',
'homography_matrix')),
]
strong_pipeline = [
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(
type='RandomOrder',
transforms=[
dict(type='RandAugment', aug_space=color_space, aug_num=1),
dict(type='RandAugment', aug_space=geometric, aug_num=1),
]),
dict(type='RandomErasing', n_patches=(1, 5), ratio=(0, 0.2)),
dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor', 'flip', 'flip_direction',
'homography_matrix')),
]
unsup_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='LoadEmptyAnnotations'),
dict(
type='MultiBranch',
branch_field=branch_field,
unsup_teacher=weak_pipeline,
unsup_student=strong_pipeline,
)
]
labeled_dataset = dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=sup_pipeline)
unlabeled_dataset = dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_unlabeled2017.json',
data_prefix=dict(img='unlabeled2017/'),
filter_cfg=dict(filter_empty_gt=False),
pipeline=unsup_pipeline)
train_dataloader = dict(
batch_size=batch_size,
num_workers=num_workers,
persistent_workers=True,
sampler=dict(
type='GroupMultiSourceSampler',
batch_size=batch_size,
source_ratio=[1, 4]),
dataset=dict(
type='ConcatDataset', datasets=[labeled_dataset, unlabeled_dataset]))
transforms.py raises an error:

```
Traceback (most recent call last):
  File "./tools/train.py", line 133, in <module>
```
Have you ever solved this problem? @827346462 @Czm369
I have met the same error, caused by empty pseudo labels:

```
RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity
```
The traceback is:

```
Traceback (most recent call last):
  File "/raid/xxx/Code/mmdetection/tools/train.py", line 133, in <module>
    main()
  File "/raid/xxx/Code/mmdetection/tools/train.py", line 129, in main
    runner.train()
  File "/raid/xxx/Code/mmengine/mmengine/runner/runner.py", line 1745, in train
    model = self.train_loop.run()  # type: ignore
  File "/raid/xxx/Code/mmengine/mmengine/runner/loops.py", line 278, in run
    self.run_iter(data_batch)
  File "/raid/xxx/Code/mmengine/mmengine/runner/loops.py", line 301, in run_iter
    outputs = self.runner.model.train_step(
  File "/raid/xxx/Code/mmengine/mmengine/model/wrappers/distributed.py", line 121, in train_step
    losses = self._run_forward(data, mode='loss')
  File "/raid/xxx/Code/mmengine/mmengine/model/wrappers/distributed.py", line 161, in _run_forward
    results = self(**data, mode=mode)
  File "/raid/xxx/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/raid/xxx/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 705, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/raid/xxx/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/raid/xxx/Code/mmdetection/mmdet/models/detectors/base.py", line 92, in forward
    return self.loss(inputs, data_samples)
  File "/raid/xxx/Code/mmdetection/mmdet/models/detectors/semi_base.py", line 80, in loss
    origin_pseudo_data_samples, batch_info = self.get_pseudo_instances(
  File "/raid/xxx/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/raid/xxx/Code/mmdetection/mmdet/models/detectors/soft_teacher.py", line 118, in get_pseudo_instances
    data_samples.gt_instances.bboxes = bbox_project(
  File "/raid/xxx/Code/mmdetection/mmdet/structures/bbox/transforms.py", line 347, in bbox_project
    bboxes = corner2bbox(corners)
  File "/raid/xxx/Code/mmdetection/mmdet/structures/bbox/transforms.py", line 316, in corner2bbox
    min_xy = corners.min(dim=1)[0]
RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity
```
env: mmdetection 3.0.0, OS 18.04, mmcv 2.0.0
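The crash happens because every pseudo box can be filtered out by the score thresholds, leaving `corners` empty when `corner2bbox` calls `.min()`. A plain-Python sketch of the failure mode and a guard (the function name and guard here are illustrative, not mmdet's actual fix):

```python
def corners_to_bbox(corners):
    """Illustrative stand-in for corner2bbox: reduce (x, y) corner points
    to a bounding box, guarding the empty case instead of crashing."""
    if not corners:  # no pseudo boxes survived the score threshold
        return None
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (min(xs), min(ys), max(xs), max(ys))

print(corners_to_bbox([]))                        # None, no reduction error
print(corners_to_bbox([(1, 2), (4, 6), (3, 1)]))  # (1, 1, 4, 6)
```

An unguarded `min()` on an empty collection raises, which is exactly what the torch reduction does on a zero-element tensor.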
I followed this guide: https://github.com/open-mmlab/mmdetection/blob/v3.0.0/docs/en/user_guides/semi_det.md
I made a COCO dataset and ran `python ./tools/train.py ./configs/semi_det/semi_test.py` (my own file). It shows the error:

```
Traceback (most recent call last):
  File "./tools/train.py", line 133, in <module>
    main()
  File "./tools/train.py", line 122, in main
    runner = Runner.from_cfg(cfg)
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/runner/runner.py", line 466, in from_cfg
    cfg=cfg,
  File "/home/sevnce/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmengine/runner/runner.py", line 291, in __init__
    'train_dataloader, train_cfg, and optim_wrapper should be '
ValueError: train_dataloader, train_cfg, and optim_wrapper should be either all None or not None, but got train_dataloader={'batch_size': 4, 'num_workers': 2, 'persistent_workers': True, 'sampler': {'type': 'GroupMultiSourceSampler', 'batch_size': 4, 'source_ratio': [1, 4]}, 'dataset': {'type': 'ConcatDataset', 'datasets': [{'type': 'CocoDataset', 'data_root': 'data/coco/', 'ann_file': 'annotations/instances_train2017.json', 'data_prefix': {'img': 'train2017/'}, 'filter_cfg': {'filter_empty_gt': True, 'min_size': 32}, 'pipeline': [{'type': 'LoadImageFromFile', 'backend_args': None}, {'type': 'LoadAnnotations', 'with_bbox': True}, {'type': 'RandomResize', 'scale': (1333, 800), 'keep_ratio': True}, {'type': 'RandomFlip', 'prob': 0.5}, {'type': 'RandAugment', 'aug_space': None, 'aug_num': 1}, {'type': 'FilterAnnotations', 'min_gt_bbox_wh': (0.01, 0.01)}, {'type': 'MultiBranch', 'sup': {'type': 'PackDetInputs'}}]}, {'type': 'CocoDataset', 'data_root': 'data/coco/', 'ann_file': 'annotations/instances_unlabeled2017.json', 'data_prefix': {'img': 'unlabeled2017/'}, 'filter_cfg': {'filter_empty_gt': False}, 'pipeline': [{'type': 'LoadImageFromFile', 'backend_args': None}, {'type': 'LoadEmptyAnnotations'}, {'type': 'MultiBranch', 'unsup_teacher': [{'type': 'RandomResize', 'scale': (1333, 800), 'keep_ratio': True}, {'type': 'RandomFlip', 'prob': 0.5}, {'type': 'PackDetInputs', 'meta_keys': ('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', 'flip', 'flip_direction', 'homography_matrix')}], 'unsup_student': [{'type': 'RandomResize', 'scale': (1333, 800), 'keep_ratio': True}, {'type': 'RandomFlip', 'prob': 0.5}, {'type': 'RandomOrder', 'transforms': [{'type': 'RandAugment', 'aug_space': None, 'aug_num': 1}, {'type': 'RandAugment', 'aug_space': None, 'aug_num': 1}]}, {'type': 'RandomErasing', 'n_patches': (1, 5), 'ratio': (0, 0.2)}, {'type': 'FilterAnnotations', 'min_gt_bbox_wh': (0.01, 0.01)}, {'type': 'PackDetInputs', 'meta_keys': ('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', 'flip', 'flip_direction', 'homography_matrix')}]}]}]}}, train_cfg=None, optim_wrapper=None.
```
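The ValueError comes from an all-or-none check on the training trio: once `train_dataloader` is set, `train_cfg` and `optim_wrapper` must be set too. A sketch paraphrasing the rule from the error message (this is an illustration, not mmengine's actual code):

```python
def check_train_trio(train_dataloader, train_cfg, optim_wrapper):
    """train_dataloader, train_cfg and optim_wrapper must be set together."""
    trio = (train_dataloader, train_cfg, optim_wrapper)
    if any(v is not None for v in trio) and not all(v is not None for v in trio):
        raise ValueError(
            'train_dataloader, train_cfg, and optim_wrapper should be '
            'either all None or not None')

# The reported situation: dataloader set, loop and optimizer missing.
try:
    check_train_trio(dict(batch_size=4), None, None)
except ValueError as err:
    print('rejected:', err)

# Supplying all three satisfies the check.
check_train_trio(
    dict(batch_size=4),
    dict(type='IterBasedTrainLoop', max_iters=20000),
    dict(type='OptimWrapper', optimizer=dict(type='SGD', lr=0.02)))
```

So the cfg posted below fails because it defines `train_dataloader` but neither `train_cfg` nor `optim_wrapper`.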
This is the cfg, `semi_test.py`:

```python
_base_ = [
    '../_base_/models/faster-rcnn_r50_fpn.py', '../_base_/default_runtime.py',
    '../_base_/datasets/semi_coco_detection.py'
]
detector = _base_.model
detector.data_preprocessor = dict(
    type='DetDataPreprocessor',
    mean=[103.530, 116.280, 123.675],
    std=[1.0, 1.0, 1.0],
    bgr_to_rgb=False,
    pad_size_divisor=32)
detector.backbone = dict(
    type='ResNet',
    depth=50,
    num_stages=4,
    out_indices=(0, 1, 2, 3),
    frozen_stages=1,
    norm_cfg=dict(type='BN', requires_grad=False),
    norm_eval=True,
    style='caffe',
    init_cfg=dict(
        type='Pretrained',
        checkpoint='open-mmlab://detectron2/resnet50_caffe'))
model = dict(
    _delete_=True,
    type='SoftTeacher',
    detector=detector,
    data_preprocessor=dict(
        type='MultiBranchDataPreprocessor',
        data_preprocessor=detector.data_preprocessor),
    semi_train_cfg=dict(
        freeze_teacher=True,
        sup_weight=1.0,
        unsup_weight=4.0,
        pseudo_label_initial_score_thr=0.5,
        rpn_pseudo_thr=0.9,
        cls_pseudo_thr=0.9,
        reg_pseudo_thr=0.02,
        jitter_times=10,
        jitter_scale=0.06,
        min_pseudo_bbox_wh=(1e-2, 1e-2)),
    semi_test_cfg=dict(predict_on='teacher'))
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
backend_args = None
scale = (1333, 800)
color_space = None
geometric = None
batch_size = 4
num_workers = 2
sup_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RandomResize', scale=scale, keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='RandAugment', aug_space=color_space, aug_num=1),
    dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
    dict(type='MultiBranch', sup=dict(type='PackDetInputs'))
]
weak_pipeline = [
    dict(type='RandomResize', scale=scale, keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'flip', 'flip_direction',
                   'homography_matrix')),
]
strong_pipeline = [
    dict(type='RandomResize', scale=scale, keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(
        type='RandomOrder',
        transforms=[
            dict(type='RandAugment', aug_space=color_space, aug_num=1),
            dict(type='RandAugment', aug_space=geometric, aug_num=1),
        ]),
    dict(type='RandomErasing', n_patches=(1, 5), ratio=(0, 0.2)),
    dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'flip', 'flip_direction',
                   'homography_matrix')),
]
unsup_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadEmptyAnnotations'),
    dict(
        type='MultiBranch',
        unsup_teacher=weak_pipeline,
        unsup_student=strong_pipeline,
    )
]
labeled_dataset = dict(
    type=dataset_type,
    data_root=data_root,
    ann_file='annotations/instances_train2017.json',
    data_prefix=dict(img='train2017/'),
    filter_cfg=dict(filter_empty_gt=True, min_size=32),
    pipeline=sup_pipeline)
unlabeled_dataset = dict(
    type=dataset_type,
    data_root=data_root,
    ann_file='annotations/instances_unlabeled2017.json',
    data_prefix=dict(img='unlabeled2017/'),
    filter_cfg=dict(filter_empty_gt=False),
    pipeline=unsup_pipeline)
train_dataloader = dict(
    batch_size=batch_size,
    num_workers=num_workers,
    persistent_workers=True,
    sampler=dict(
        type='GroupMultiSourceSampler',
        batch_size=batch_size,
        source_ratio=[1, 4]),
    dataset=dict(
        type='ConcatDataset', datasets=[labeled_dataset, unlabeled_dataset]))
custom_hooks = [dict(type='MeanTeacherHook')]
val_cfg = dict(type='TeacherStudentValLoop')
```