open-mmlab / mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.
https://mmyolo.readthedocs.io/zh_CN/dev/
GNU General Public License v3.0

Problems training on the cat dataset with the MoCo v3 self-supervised ResNet-50 from MMSelfSup as the YOLOv5 backbone #669

Open arkerman opened 1 year ago

arkerman commented 1 year ago

Prerequisite

💬 Describe the reimplementation questions

Hi! I trained YOLOv5-s on the cat dataset following this tutorial: https://mmyolo.readthedocs.io/zh_CN/latest/get_started/15_minutes_object_detection.html

I then wanted to replace the YOLOv5 backbone with a self-supervised model from MMSelfSup, following this tutorial: https://mmyolo.readthedocs.io/zh_CN/latest/recommended_topics/replace_backbone.html

However, the mAP from training is very low:

 Epoch(val) [30][28/28]  coco/bbox_mAP: 0.0060  coco/bbox_mAP_50: 0.0150  coco/bbox_mAP_75: 0.0000  coco/bbox_mAP_s: -1.0000  coco/bbox_mAP_m: -1.0000  coco/bbox_mAP_l: 0.0060

And here is the config I used:

_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'
# python tools/train.py configs/yolov5/yolov5_s-v61_mocov3_fast_1xb12-100e_cat.py
custom_imports = dict(imports=['mmselfsup.models'], allow_failed_imports=False)
checkpoint_file = 'https://download.openmmlab.com/mmselfsup/1.x/mocov3/mocov3_resnet50_8xb512-amp-coslr-800e_in1k/mocov3_resnet50_8xb512-amp-coslr-800e_in1k_20220927-e043f51a.pth'  # noqa
deepen_factor = _base_.deepen_factor
widen_factor = 1.0
channels = [512, 1024, 2048]

data_root = './data/cat/'
class_name = ('cat', )
num_classes = len(class_name)
metainfo = dict(classes=class_name, palette=[(20, 220, 60)])

anchors = [
    [(68, 69), (154, 91), (143, 162)],  # P3/8
    [(242, 160), (189, 287), (391, 207)],  # P4/16
    [(353, 337), (539, 341), (443, 432)]  # P5/32
]

max_epochs = 100
train_batch_size_per_gpu = 12
train_num_workers = 1

# load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco/yolov5_s-v61_syncbn_fast_8xb16-300e_coco_20220918_084700-86e02187.pth'  # noqa

model = dict(
    backbone=dict(
        _delete_=True,     # delete the backbone fields inherited from _base_
        type='mmselfsup.ResNet',
        depth=50,
        num_stages=4,
        out_indices=(2, 3, 4), # Note: out_indices of ResNet in MMSelfSup are 1 larger than in MMDet and MMCls
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint=checkpoint_file)),
    neck=dict(
        type='YOLOv5PAFPN',
        deepen_factor=deepen_factor,
        widen_factor=widen_factor,
        in_channels=channels, # Note: ResNet-50 outputs 3 feature maps with channels [512, 1024, 2048], which do not match the original yolov5-s neck and must be changed
        out_channels=channels),
    bbox_head=dict(
        type='YOLOv5Head',
        head_module=dict(
            type='YOLOv5HeadModule',
            in_channels=channels, # the head input channels must be changed accordingly
            widen_factor=widen_factor))
)

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/trainval.json',
        data_prefix=dict(img='images/')))

val_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        data_root=data_root,
        ann_file='annotations/test.json',
        data_prefix=dict(img='images/')))

test_dataloader = val_dataloader

_base_.optim_wrapper.optimizer.batch_size_per_gpu = train_batch_size_per_gpu

val_evaluator = dict(ann_file=data_root + 'annotations/test.json')
test_evaluator = val_evaluator

default_hooks = dict(
    checkpoint=dict(interval=10, max_keep_ckpts=2, save_best='auto'),
    # The warmup_mim_iter parameter is critical.
    # The default value of 1000 is too large for the small cat dataset.
    param_scheduler=dict(max_epochs=max_epochs, warmup_mim_iter=10),
    logger=dict(type='LoggerHook', interval=5))
train_cfg = dict(max_epochs=max_epochs, val_interval=10)
# visualizer = dict(vis_backends = [dict(type='LocalVisBackend'), dict(type='WandbVisBackend')]) # noqa

So why are the results this bad? The MoCo paper reports that self-supervised pre-training can even surpass supervised pre-training. Or is something wrong with my config? Any advice would be appreciated!

Environment


# packages in environment at D:\Anaconda3\envs\openmmlab:
#
# Name                    Version                   Build  Channel
absl-py                   1.4.0                     <pip>
addict                    2.4.0                     <pip>
albumentations            1.3.0                     <pip>
attrs                     22.2.0                    <pip>
blas                      1.0                         mkl
brotlipy                  0.7.0           py310h2bbff1b_1002      
bzip2                     1.0.8                he774522_0
ca-certificates           2023.01.10           haa95532_0
cachetools                5.3.0                     <pip>
certifi                   2022.12.7       py310haa95532_0
cffi                      1.15.1          py310h2bbff1b_3
charset-normalizer        2.0.4              pyhd3eb1b0_0
click                     8.1.3                     <pip>
colorama                  0.4.6                     <pip>
contourpy                 1.0.7                     <pip>
cryptography              39.0.1          py310h21b164f_0
cudatoolkit               11.3.1               h59b6b97_2
cycler                    0.11.0                    <pip>
cython                    0.29.33         py310hd77b12b_0
e2cnn                     0.2.3                     <pip>
einops                    0.6.0                     <pip>
exceptiongroup            1.1.0                     <pip>
filelock                  3.9.0                     <pip>
flit-core                 3.6.0              pyhd3eb1b0_0
fonttools                 4.38.0                    <pip>
freetype                  2.12.1               ha860e81_0
future                    0.18.3                    <pip>
giflib                    5.2.1                h8cc25b3_3
google-auth               2.16.2                    <pip>
google-auth-oauthlib      0.4.6                     <pip>
grpcio                    1.51.3                    <pip>
huggingface-hub           0.12.1                    <pip>
idna                      3.4             py310haa95532_0
imageio                   2.26.0                    <pip>
iniconfig                 2.0.0                     <pip>
intel-openmp              2021.4.0          haa95532_3556
joblib                    1.2.0                     <pip>
jpeg                      9e                   h2bbff1b_1
kiwisolver                1.4.4                     <pip>
lazy_loader               0.1                       <pip>
lerc                      3.0                  hd77b12b_0
libdeflate                1.17                 h2bbff1b_0
libffi                    3.4.2                hd77b12b_6
libpng                    1.6.39               h8cc25b3_0
libprotobuf               3.20.1               h23ce68f_0
libtiff                   4.5.0                h6c2663c_2
libuv                     1.44.2               h2bbff1b_0
libwebp                   1.2.4                hbc33d0d_1
libwebp-base              1.2.4                h2bbff1b_1
lz4-c                     1.9.4                h2bbff1b_0
Markdown                  3.4.1                     <pip>
markdown-it-py            2.2.0                     <pip>
MarkupSafe                2.1.2                     <pip>
matplotlib                3.7.1                     <pip>
mdurl                     0.1.2                     <pip>
memory_profiler           0.58.0             pyhd3eb1b0_0
mkl                       2021.4.0           haa95532_640
mkl-service               2.4.0           py310h2bbff1b_0
mkl_fft                   1.3.1           py310ha0764ea_0
mkl_random                1.2.2           py310h4ed8f06_0
mmcls                     1.0.0rc5                  <pip>
mmcv                      2.0.0rc4                  <pip>
mmdet                     3.0.0rc6                  <pip>
mmengine                  0.6.0                     <pip>
mmrazor                   1.0.0rc2                  <pip>
mmrotate                  1.0.0rc1                  <pip>
mmselfsup                 1.0.0rc6                  <pip>
mmyolo                    0.5.0                     <pip>
model-index               0.1.11                    <pip>
modelindex                0.0.2                     <pip>
mpmath                    1.3.0                     <pip>
networkx                  3.0                       <pip>
numpy                     1.23.5          py310h60c9a35_0
numpy-base                1.23.5          py310h04254f7_0
oauthlib                  3.2.2                     <pip>
opencv-python             4.7.0.72                  <pip>
openmim                   0.3.6                     <pip>
openssl                   1.1.1t               h2bbff1b_0
ordered-set               4.1.0                     <pip>
packaging                 23.0                      <pip>
pandas                    1.5.3                     <pip>
parameterized             0.8.1              pyhd3eb1b0_1
pillow                    9.4.0           py310hd77b12b_0
pip                       22.3.1          py310haa95532_0
pluggy                    1.0.0                     <pip>
prettytable               3.6.0                     <pip>
protobuf                  3.20.1          py310hd77b12b_0
protobuf                  4.22.0                    <pip>
psutil                    5.9.0           py310h2bbff1b_0
pyasn1                    0.4.8                     <pip>
pyasn1-modules            0.2.8                     <pip>
pycocotools               2.0.6                     <pip>
pycparser                 2.21               pyhd3eb1b0_0
Pygments                  2.14.0                    <pip>
pyopenssl                 23.0.0          py310haa95532_0
pyparsing                 3.0.9                     <pip>
pysocks                   1.7.1           py310haa95532_0
pytest                    7.2.2                     <pip>
python                    3.10.9               h966fe2a_1
python-dateutil           2.8.2                     <pip>
pytorch                   1.12.1          py3.10_cuda11.3_cudnn8_0    pytorch
pytorch-mutex             1.0                        cuda    pytorch
pytz                      2022.7.1                  <pip>
PyWavelets                1.4.1                     <pip>
PyYAML                    6.0                       <pip>
qudida                    0.0.4                     <pip>
regex                     2022.10.31                <pip>
requests                  2.28.1          py310haa95532_0
requests-oauthlib         1.3.1                     <pip>
rich                      13.3.2                    <pip>
rsa                       4.9                       <pip>
scikit-image              0.20.0                    <pip>
scikit-learn              1.2.1                     <pip>
scipy                     1.10.1                    <pip>
setuptools                65.6.3          py310haa95532_0
six                       1.16.0             pyhd3eb1b0_1
sqlite                    3.40.1               h2bbff1b_0
sympy                     1.11.1                    <pip>
tabulate                  0.9.0                     <pip>
tensorboard               2.12.0                    <pip>
tensorboard-data-server   0.7.0                     <pip>
tensorboard-plugin-wit    1.8.1                     <pip>
termcolor                 2.2.0                     <pip>
terminaltables            3.1.10                    <pip>
threadpoolctl             3.1.0                     <pip>
tifffile                  2023.2.28                 <pip>
timm                      0.6.12                    <pip>
tk                        8.6.12               h2bbff1b_0
tomli                     2.0.1                     <pip>
torchaudio                0.12.1              py310_cu113    pytorch
torchvision               0.13.1              py310_cu113    pytorch
tqdm                      4.65.0                    <pip>
typing_extensions         4.4.0           py310haa95532_0
tzdata                    2022g                h04d1e81_0
urllib3                   1.26.14         py310haa95532_0
vc                        14.2                 h21ff451_1
vs2015_runtime            14.27.29016          h5e58377_2
wcwidth                   0.2.6                     <pip>
Werkzeug                  2.2.3                     <pip>
wheel                     0.38.4          py310haa95532_0
win_inet_pton             1.1.0           py310haa95532_0
wincertstore              0.2             py310haa95532_2
xz                        5.2.10               h8cc25b3_1
yapf                      0.32.0                    <pip>
zlib                      1.2.13               h8cc25b3_0
zstd                      1.5.2                h19a0ad4_0

Expected results

No response

Additional information

No response

mm-assistant[bot] commented 1 year ago

We recommend using English or English & Chinese for issues so that we could have broader discussion.

hhaAndroid commented 1 year ago

@arkerman My guess is that the COCO pre-trained weights are missing. The configuration we provide uses COCO pre-trained weights, while you only have ImageNet pre-trained weights.
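If that is the hypothesis, one quick way to test it is a baseline run that keeps the stock YOLOv5-s architecture and starts from the COCO-pretrained detector checkpoint; the posted config already carries that URL in a commented-out `load_from` line. A minimal sketch of such a baseline config:

```python
# Hypothetical baseline config: stock YOLOv5-s architecture, starting from the
# COCO-pretrained detector weights instead of only ImageNet backbone weights.
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'
load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco/yolov5_s-v61_syncbn_fast_8xb16-300e_coco_20220918_084700-86e02187.pth'  # noqa
```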

arkerman commented 1 year ago

Hi, dude! Thanks for your reply! I really appreciate it.

It seems it is not about the pre-trained weights, because I ran a test to verify it: I simply replaced the backbone with ResNet-50 (pre-trained weights from torchvision) without MoCo. As far as I know, the torchvision pre-trained model was trained on ImageNet.

And here is the config I used:


_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'
# python tools/train.py configs/yolov5/yolov5_s-v61_resnet50_1xb12-100e_cat.py
# python tools/train.py configs/yolov5/yolov5_s-v61_mocov3_fast_1xb12-100e_cat.py --resume
# python tools/test.py configs/yolov5/yolov5_s-v61_mocov3_fast_1xb12-100e_cat.py work_dirs/yolov5_s-v61_mocov3_fast_1xb12-100e_cat/epoch_100.pth --show-dir show_results

deepen_factor = _base_.deepen_factor
widen_factor = 1.0
channels = [512, 1024, 2048]

data_root = './data/cat/'
class_name = ('cat', )
num_classes = len(class_name)
metainfo = dict(classes=class_name, palette=[(20, 220, 60)])

anchors = [
    [(68, 69), (154, 91), (143, 162)],     # P3/8
    [(242, 160), (189, 287), (391, 207)],  # P4/16
    [(353, 337), (539, 341), (443, 432)]   # P5/32
]

max_epochs = 100
train_batch_size_per_gpu = 12
train_num_workers = 1

model = dict(
    backbone=dict(
        _delete_=True,           # delete the backbone fields inherited from _base_
        type='mmdet.ResNet',     # use the ResNet from mmdet
        depth=50,
        num_stages=4,
        out_indices=(1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='YOLOv5PAFPN',
        widen_factor=widen_factor,
        in_channels=channels,   # Note: ResNet-50 outputs 3 feature maps with channels [512, 1024, 2048], which do not match the original yolov5-s neck and must be changed
        out_channels=channels),
    bbox_head=dict(
        type='YOLOv5Head',
        head_module=dict(
            type='YOLOv5HeadModule',
            in_channels=channels,  # the head input channels must be changed accordingly
            widen_factor=widen_factor))
)

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/trainval.json',
        data_prefix=dict(img='images/')))

val_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        data_root=data_root,
        ann_file='annotations/test.json',
        data_prefix=dict(img='images/')))

test_dataloader = val_dataloader

_base_.optim_wrapper.optimizer.batch_size_per_gpu = train_batch_size_per_gpu

val_evaluator = dict(ann_file=data_root + 'annotations/test.json')
test_evaluator = val_evaluator

default_hooks = dict(
    checkpoint=dict(interval=10, max_keep_ckpts=2, save_best='auto'),
    # The warmup_mim_iter parameter is critical.
    # The default value of 1000 is too large for the small cat dataset.
    param_scheduler=dict(max_epochs=max_epochs, warmup_mim_iter=10),
    logger=dict(type='LoggerHook', interval=5))
train_cfg = dict(max_epochs=max_epochs, val_interval=10)
# visualizer = dict(vis_backends = [dict(type='LocalVisBackend'), dict(type='WandbVisBackend')]) # noqa

Finally, here is the mAP after 60 epochs:

03/17 11:38:12 - mmengine - INFO - Epoch(val) [60][28/28]  coco/bbox_mAP: 0.4120  coco/bbox_mAP_50: 0.8290  coco/bbox_mAP_75: 0.2960  coco/bbox_mAP_s: -1.0000  coco/bbox_mAP_m: -1.0000  coco/bbox_mAP_l: 0.4120

That looks normal.

So I don't know why the mAP is so low when I use the MoCo ResNet-50 backbone. Are there any suggestions to help me fix this problem? Any advice will be appreciated!
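A side note on both configs above: the `anchors` list (and `num_classes`) is defined at the top but never referenced, so the model still uses the COCO-default anchors and 80-class head from `_base_`. In the 15-minutes tutorial the custom anchors are wired into the head's `prior_generator` and `num_classes` into `head_module`. A sketch of that override, with field names assumed to follow the stock YOLOv5 config:

```python
# Sketch: wire the dataset-specific anchors and class count into the YOLOv5 head.
# Without this override, defining `anchors` at the top of the config has no effect.
anchors = [
    [(68, 69), (154, 91), (143, 162)],     # P3/8
    [(242, 160), (189, 287), (391, 207)],  # P4/16
    [(353, 337), (539, 341), (443, 432)],  # P5/32
]

model = dict(
    bbox_head=dict(
        head_module=dict(num_classes=1),  # single 'cat' class
        prior_generator=dict(
            type='mmdet.YOLOAnchorGenerator',
            base_sizes=anchors,
            strides=[8, 16, 32])))
```

This alone would not explain the gap between the MoCo and torchvision runs (both configs omit it), but it is worth fixing in either case.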

arkerman commented 1 year ago

@hhaAndroid