open-mmlab / mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.
https://mmyolo.readthedocs.io/zh_CN/dev/
GNU General Public License v3.0

Custom Install: yolov5_collate() got an unexpected keyword argument '_scope_' #886

Closed: thiagoribeirodamotta closed this issue 1 year ago

thiagoribeirodamotta commented 1 year ago

šŸž Describe the bug

When running mmyolo/tools/train.py with a custom config file, the following error is raised:

Traceback (most recent call last):
  File "/project/src/mmyolo/tools/train.py", line 123, in <module>
    main()
  File "/project/src/mmyolo/tools/train.py", line 119, in main
    runner.train()
  File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/runner.py", line 1745, in train
    model = self.train_loop.run()  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py", line 111, in run_epoch
    for idx, data_batch in enumerate(self.dataloader):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1348, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1374, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.10/dist-packages/torch/_utils.py", line 697, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
    return self.collate_fn(data)
TypeError: yolov5_collate() got an unexpected keyword argument '_scope_'
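For what it's worth, my reading of the failure (an assumption based on how mmengine builds dataloaders, simplified here, not the actual source) is that the collate_fn config dict is turned into a functools.partial, so any key still left in the dict, such as a _scope_ entry injected by the registry machinery, gets forwarded to yolov5_collate() as a keyword argument:

# Minimal sketch of the assumed mechanism; the stand-in function below is
# simplified, not mmyolo's real implementation.
from functools import partial

def yolov5_collate(data_batch, use_ms_training=False):
    # stand-in for mmyolo's collate function
    return data_batch

collate_fn_cfg = dict(type='yolov5_collate', _scope_='mmyolo')  # '_scope_' sneaks in
collate_fn_cfg.pop('type')  # the runner pops 'type' to look up the function
collate_fn = partial(yolov5_collate, **collate_fn_cfg)  # leftover keys become kwargs

collate_fn([])  # TypeError: yolov5_collate() got an unexpected keyword argument '_scope_'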

Environment

Made a custom installation of mmyolo using a Dockerfile based on an NVIDIA NGC image (23.08-py3), which ships PyTorch 2.1.0a0+29c30b1.

Since the PyTorch version in the NVIDIA container is newer than 2.0.0, it triggers a bug with the latest version of opencv-python, which therefore had to be downgraded to 4.8.0.74.

Also because of the PyTorch version, I had to git clone MMCV and build it from source instead of installing it with mim: the fix for the C++17 compiler requirement only landed about two weeks ago, while the latest MMCV release available through mim is a few months old.

The directory structure is currently as follows:

Installation was done with the following Dockerfile:

FROM nvcr.io/nvidia/pytorch:23.08-py3

ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=America/Sao_Paulo

COPY . /project

WORKDIR /project
RUN apt-get update -y && apt-get install -y \
    python3 \
    python-is-python3 \
    python3-pip \
    python3-distutils && \
    rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install --upgrade pip

RUN python3 -m pip install -U openmim && \
    python3 -m pip install opencv-python==4.8.0.74 && \
    mim install "mmengine>=0.6.0"

RUN mkdir -p 3rd_party

RUN cd 3rd_party && \
    git clone --depth 1 https://github.com/open-mmlab/mmcv.git && \
    cd mmcv && \
    mim install .

RUN mim install mmdet

RUN mim install "mmyolo"
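
Inside the built container, the environment can be double-checked with a short snippet like the following (a sketch; the expected values are the ones mentioned above, and all of these packages expose __version__):

# Print the versions this report refers to.
import torch, cv2, mmengine, mmcv, mmdet, mmyolo

print(torch.__version__)     # 2.1.0a0+29c30b1 from the NGC 23.08 image
print(cv2.__version__)       # 4.8.0.74, pinned above
print(mmengine.__version__)  # >= 0.6.0
print(mmcv.__version__, mmdet.__version__, mmyolo.__version__)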

Additional information

Since MMYOLO was installed as a third-party package with the command mim install "mmyolo", I manually copied the tools/train.py script to a custom folder (no modifications were made to it).
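For reference, the copied script is launched exactly like the in-repo one, e.g. python tools/train.py path/to/custom_config.py (the path here is a placeholder, not the exact one used).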

Besides that, the following config file is being used. The main changes are the paths and dataset settings: the dataset consists of a single class, with separate annotation files for training and validation, and _base_ was switched to the YOLOv7-L config, i.e. _base_ = 'mmyolo::yolov7/yolov7_l_syncbn_fast_8x16b-300e_coco.py':

#_base_ = 'yolov7_tiny_syncbn_fast_8x16b-300e_coco.py'
_base_ = 'mmyolo::yolov7/yolov7_l_syncbn_fast_8x16b-300e_coco.py'

data_root = './data/cvat_task141_yolo_format_only_Classe1_splits/'
class_name = ('Classe1', )
num_classes = len(class_name)
metainfo = dict(classes=class_name, palette=[(20, 220, 60)])
img_scale = (1920, 1080)  # width, height

train_ann_file = 'annotations2/train2.json'
train_data_prefix = 'images/all/'  # Prefix of train image path
# Path of val annotation file
val_ann_file = 'annotations2/val2.json'
val_data_prefix = 'images/all/'  # Prefix of val image path

anchors = [
    [(12, 16), (19, 36), (40, 28)],  # P3/8
    [(36, 75), (76, 55), (72, 146)],  # P4/16
    [(142, 110), (192, 243), (459, 401)]  # P5/32
]

base_lr = 0.01
max_epochs = 200
train_batch_size_per_gpu = 16
train_num_workers = 8

num_epoch_stage2 = 30  # The last 30 epochs switch evaluation interval
val_interval_stage2 = 1  # Evaluation interval
save_epoch_intervals = 1  # Save model checkpoint and validation intervals

# -----model related-----
strides = [8, 16, 32]  # Strides of multi-scale prior box
num_det_layers = 3  # The number of model output scales
norm_cfg = dict(type='BN', momentum=0.03, eps=0.001)

# Data augmentation
max_translate_ratio = 0.2  # YOLOv5RandomAffine
scaling_ratio_range = (0.1, 2.0)  # YOLOv5RandomAffine
mixup_prob = 0.15  # YOLOv5MixUp
randchoice_mosaic_prob = [0.8, 0.2]
mixup_alpha = 8.0  # YOLOv5MixUp
mixup_beta = 8.0  # YOLOv5MixUp

# -----train val related-----
loss_cls_weight = 0.3
loss_bbox_weight = 0.05
loss_obj_weight = 0.7

model_test_cfg = dict(
    # The config of multi-label for multi-class prediction.
    multi_label=True,
    # The number of boxes before NMS.
    nms_pre=30000,
    score_thr=0.001,  # Threshold to filter out boxes.
    nms=dict(type='nms', iou_threshold=0.65),  # NMS type and threshold
    max_per_img=300)  # Max number of detections of each image

# BatchYOLOv7Assigner params
simota_candidate_topk = 10
simota_iou_weight = 3.0
simota_cls_weight = 1.0
prior_match_thr = 4.  # Priori box matching threshold
obj_level_weights = [4., 1.,
                     0.4]  # The obj loss weights of the three output layers

lr_factor = 0.1  # Learning rate scaling factor
weight_decay = 0.0005
save_epoch_intervals = 1  # Save model checkpoint and validation intervals
max_keep_ckpts = 3  # The maximum checkpoints to keep.

# load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco/yolov7_tiny_syncbn_fast_8x16b-300e_coco_20221126_102719-0ee5bbdf.pth'  # noqa
load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov7/yolov7_l_syncbn_fast_8x16b-300e_coco/yolov7_l_syncbn_fast_8x16b-300e_coco_20221123_023601-8113c0eb.pth'  # noqa

# model = dict(
#     backbone=dict(frozen_stages=4),
#     bbox_head=dict(
#         head_module=dict(num_classes=num_classes),
#         prior_generator=dict(base_sizes=anchors)))

# ===============================Unmodified in most cases====================
model = dict(
    type='YOLODetector',
    data_preprocessor=dict(
        type='YOLOv5DetDataPreprocessor',
        mean=[0., 0., 0.],
        std=[255., 255., 255.],
        bgr_to_rgb=True),
    backbone=dict(
        type='YOLOv7Backbone',
        arch='L',
        norm_cfg=norm_cfg,
        act_cfg=dict(type='SiLU', inplace=True)),
    neck=dict(
        type='YOLOv7PAFPN',
        block_cfg=dict(
            type='ELANBlock',
            middle_ratio=0.5,
            block_ratio=0.25,
            num_blocks=4,
            num_convs_in_block=1),
        upsample_feats_cat_first=False,
        in_channels=[512, 1024, 1024],
        # The real output channel will be multiplied by 2
        out_channels=[128, 256, 512],
        norm_cfg=norm_cfg,
        act_cfg=dict(type='SiLU', inplace=True)),
    bbox_head=dict(
        type='YOLOv7Head',
        head_module=dict(
            type='YOLOv7HeadModule',
            num_classes=num_classes,
            in_channels=[256, 512, 1024],
            featmap_strides=strides,
            num_base_priors=3),
        prior_generator=dict(
            type='mmdet.YOLOAnchorGenerator',
            base_sizes=anchors,
            strides=strides),
        # scaled based on number of detection layers
        loss_cls=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_cls_weight *
            (num_classes / 80 * 3 / num_det_layers)),
        loss_bbox=dict(
            type='IoULoss',
            iou_mode='ciou',
            bbox_format='xywh',
            reduction='mean',
            loss_weight=loss_bbox_weight * (3 / num_det_layers),
            return_iou=True),
        loss_obj=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_obj_weight *
            ((img_scale[0] / 640)**2 * 3 / num_det_layers)),
        prior_match_thr=prior_match_thr,
        obj_level_weights=obj_level_weights,
        # BatchYOLOv7Assigner params
        simota_candidate_topk=simota_candidate_topk,
        simota_iou_weight=simota_iou_weight,
        simota_cls_weight=simota_cls_weight),
    test_cfg=model_test_cfg)

pre_transform = [
    dict(type='LoadImageFromFile', backend_args=_base_.backend_args),
    dict(type='LoadAnnotations', with_bbox=True)
]

mosiac4_pipeline = [
    dict(
        type='Mosaic',
        img_scale=img_scale,
        pad_val=114.0,
        pre_transform=pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        max_translate_ratio=max_translate_ratio,  # note
        scaling_ratio_range=scaling_ratio_range,  # note
        # img_scale is (width, height)
        border=(-img_scale[0] // 2, -img_scale[1] // 2),
        border_val=(114, 114, 114)),
]

mosiac9_pipeline = [
    dict(
        type='Mosaic9',
        img_scale=img_scale,
        pad_val=114.0,
        pre_transform=pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        max_translate_ratio=max_translate_ratio,  # note
        scaling_ratio_range=scaling_ratio_range,  # note
        # img_scale is (width, height)
        border=(-img_scale[0] // 2, -img_scale[1] // 2),
        border_val=(114, 114, 114)),
]

randchoice_mosaic_pipeline = dict(
    type='RandomChoice',
    transforms=[mosiac4_pipeline, mosiac9_pipeline],
    prob=randchoice_mosaic_prob)

train_pipeline = [
    *pre_transform,
    randchoice_mosaic_pipeline,
    dict(
        type='YOLOv5MixUp',
        alpha=mixup_alpha,  # note
        beta=mixup_beta,  # note
        prob=mixup_prob,
        pre_transform=[*pre_transform, randchoice_mosaic_pipeline]),
    dict(type='YOLOv5HSVRandomAug'),
    dict(type='mmdet.RandomFlip', prob=0.5),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                   'flip_direction'))
]

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file=train_ann_file,
        data_prefix=dict(img=train_data_prefix),
        pipeline=train_pipeline))

test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=_base_.backend_args),
    dict(type='YOLOv5KeepRatioResize', scale=img_scale),
    dict(
        type='LetterResize',
        scale=img_scale,
        allow_scale_up=False,
        pad_val=dict(img=114)),
    dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'pad_param'))
]

val_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        data_root=data_root,
        ann_file=val_ann_file,
        data_prefix=dict(img=val_data_prefix),
        filter_cfg=dict(filter_empty_gt=False, min_size=32),
        pipeline=test_pipeline))

test_dataloader = val_dataloader

_base_.optim_wrapper.optimizer.batch_size_per_gpu = train_batch_size_per_gpu

val_evaluator = dict(ann_file=data_root + val_ann_file)
test_evaluator = val_evaluator

# default_hooks = dict(
#     checkpoint=dict(interval=10, max_keep_ckpts=2, save_best='auto'),
#     # The warmup_mim_iter parameter is critical.
#     # The default value is 1000 which is not suitable for cat datasets.
#     param_scheduler=dict(max_epochs=max_epochs, warmup_mim_iter=10),
#     logger=dict(type='LoggerHook', interval=5))
# train_cfg = dict(max_epochs=max_epochs, val_interval=10)
# # visualizer = dict(vis_backends = [dict(type='LocalVisBackend'), dict(type='WandbVisBackend')]) # noqa

param_scheduler = None
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='SGD',
        lr=base_lr,
        momentum=0.937,
        weight_decay=weight_decay,
        nesterov=True,
        batch_size_per_gpu=train_batch_size_per_gpu),
    constructor='YOLOv7OptimWrapperConstructor')

default_hooks = dict(
    param_scheduler=dict(
        type='YOLOv5ParamSchedulerHook',
        scheduler_type='cosine',
        lr_factor=lr_factor,  # note
        max_epochs=max_epochs),
    checkpoint=dict(
        type='CheckpointHook',
        save_param_scheduler=False,
        interval=save_epoch_intervals,
        save_best='auto',
        max_keep_ckpts=max_keep_ckpts))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0001,
        update_buffers=True,
        strict_load=False,
        priority=49)
]

val_evaluator = dict(
    type='mmdet.CocoMetric',
    proposal_nums=(100, 1, 10),  # Can be accelerated
    ann_file=data_root + val_ann_file,
    metric='bbox')
test_evaluator = val_evaluator

train_cfg = dict(
    type='EpochBasedTrainLoop',
    max_epochs=max_epochs,
    val_interval=save_epoch_intervals,
    dynamic_intervals=[(max_epochs - num_epoch_stage2, val_interval_stage2)])
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

thiagoribeirodamotta commented 1 year ago

After reading the links below, it seems my errors occurred because the dataloaders in my config file were missing a sampler. After adding sampler=dict(_delete_=True, type='DefaultSampler', shuffle=True) to train_dataloader, val_dataloader and test_dataloader, and collate_fn=dict(_delete_=True, type='yolov5_collate') to train_dataloader, the aforementioned error disappeared, as sketched below.
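In config-override form, the fix described above looks like this minimal sketch (appended to the custom config; whether the val/test samplers should really use shuffle=True is a separate question, the '_scope_' error is gone either way):

# Explicit samplers for all three dataloaders and an explicit collate_fn for
# the train dataloader; _delete_=True drops whatever the base config had
# registered for these keys.
train_dataloader = dict(
    sampler=dict(_delete_=True, type='DefaultSampler', shuffle=True),
    collate_fn=dict(_delete_=True, type='yolov5_collate'))
val_dataloader = dict(
    sampler=dict(_delete_=True, type='DefaultSampler', shuffle=True))
test_dataloader = dict(
    sampler=dict(_delete_=True, type='DefaultSampler', shuffle=True))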

Link 1: https://github.com/FishAndWasabi/YOLO-MS/issues/4
Link 2: https://github.com/FishAndWasabi/YOLO-MS/issues/8