zcablii / LSKNet

(IJCV2024 & ICCV2023) LSKNet: A Foundation Lightweight Backbone for Remote Sensing

Why, when I use lsk_s_ema_fpn_1x_dota_le90.py as the config and lsk_s_ema_fpn_1x_dota_le90_20230212-30ed4041.pth as the checkpoint to test DOTAv1, is the test mAP only 81.3% rather than the 81.85% reported on the LSKNet official website? #49

Closed xxxyyynnn closed 8 months ago

xxxyyynnn commented 8 months ago

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

master branch https://github.com/open-mmlab/mmrotate

Environment

sys.platform: linux
Python: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda-11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 11.1.0-1ubuntu1~18.04.1) 11.1.0
PyTorch: 2.0.1+cu118
PyTorch compiling details: PyTorch built with:

TorchVision: 0.15.2+cu118
OpenCV: 4.9.0
MMCV: 1.7.2
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.8
MMRotate: 0.3.4+

Reproduces the problem - code sample

    # Copyright (c) OpenMMLab. All rights reserved.
    import argparse
    import os
    import os.path as osp
    import time
    import warnings

    import mmcv
    import torch
    from mmcv import Config, DictAction
    from mmcv.cnn import fuse_conv_bn
    from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
    from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
                             wrap_fp16_model)
    from mmdet.apis import multi_gpu_test, single_gpu_test
    from mmdet.datasets import build_dataloader, replace_ImageToTensor

    from mmrotate.datasets import build_dataset
    from mmrotate.models import build_detector
    from mmrotate.utils import compat_cfg, setup_multi_processes


    def parse_args():
        """Parse parameters."""
        parser = argparse.ArgumentParser(
            description='MMDet test (and eval) a model')
        parser.add_argument(
            '--config',
            default='/home/xyn02/LSKNet/configs/xynModel/lsk_s_ema_fpn_1x_dota_le90.py',
            help='test config file path')
        parser.add_argument(
            '--checkpoint',
            default='/home/xyn02/LSKNet/configs/xynModel/lsk_s_ema_fpn_1x_dota_le90_20230212-30ed4041.pth',
            help='checkpoint file')
        parser.add_argument(
            '--work-dir',
            default='/home/xyn02/LSKNet/runs/202403/20240326/lsk_s_ema_fpn_1x_dota_le90_run/work_dir',
            help='the directory to save the file containing evaluation metrics')
        parser.add_argument('--out', help='output result file in pickle format')
        parser.add_argument(
            '--fuse-conv-bn',
            action='store_true',
            help='Whether to fuse conv and bn, this will slightly increase '
            'the inference speed')
        parser.add_argument(
            '--gpu-ids',
            type=int,
            nargs='+',
            help='ids of gpus to use '
            '(only applicable to non-distributed testing)')
        parser.add_argument(
            '--format-only',
            action='store_true',
            help='Format the output results without performing evaluation. It is '
            'useful when you want to format the result to a specific format and '
            'submit it to the test server')
        parser.add_argument(
            '--eval',
            type=str,
            nargs='+',
            help='evaluation metrics, which depends on the dataset, e.g., "bbox",'
            ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC')
        parser.add_argument('--show', action='store_true', help='show results')
        parser.add_argument(
            '--show-dir',
            default='/home/xyn02/LSKNet/runs/202403/20240326/lsk_s_ema_fpn_1x_dota_le90_run/show_dir',
            help='directory where painted images will be saved')
        parser.add_argument(
            '--show-score-thr',
            type=float,
            default=0.3,
            help='score threshold (default: 0.3)')
        parser.add_argument(
            '--gpu-collect',
            action='store_true',
            help='whether to use gpu to collect results.')
        parser.add_argument(
            '--tmpdir',
            help='tmp directory used for collecting results from multiple '
            'workers, available when gpu-collect is not specified')
        parser.add_argument(
            '--cfg-options',
            nargs='+',
            action=DictAction,
            help='override some settings in the used config, the key-value pair '
            'in xxx=yyy format will be merged into config file. If the value to '
            'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
            'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
            'Note that the quotation marks are necessary and that no white space '
            'is allowed.')
        parser.add_argument(
            '--eval-options',
            nargs='+',
            action=DictAction,
            help='custom options for evaluation, the key-value pair in xxx=yyy '
            'format will be kwargs for dataset.evaluate() function')
        parser.add_argument(
            '--launcher',
            choices=['none', 'pytorch', 'slurm', 'mpi'],
            default='none',
            help='job launcher')
        parser.add_argument('--local_rank', type=int, default=0)
        args = parser.parse_args()
        if 'LOCAL_RANK' not in os.environ:
            os.environ['LOCAL_RANK'] = str(args.local_rank)

        return args

    def main():
        args = parse_args()

        assert args.out or args.eval or args.format_only or args.show \
            or args.show_dir, \
            ('Please specify at least one operation (save/eval/format/show the '
             'results / save the results) with the argument "--out", "--eval"'
             ', "--format-only", "--show" or "--show-dir"')

        if args.eval and args.format_only:
            raise ValueError('--eval and --format_only cannot be both specified')

        if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
            raise ValueError('The output file must be a pkl file.')

        cfg = Config.fromfile(args.config)
        if args.cfg_options is not None:
            cfg.merge_from_dict(args.cfg_options)

        cfg = compat_cfg(cfg)

        if args.format_only and cfg.mp_start_method != 'spawn':
            warnings.warn(
                '`mp_start_method` in `cfg` is set to `spawn` to use CUDA '
                'with multiprocessing when formatting output result.')
            cfg.mp_start_method = 'spawn'

        setup_multi_processes(cfg)

        if cfg.get('cudnn_benchmark', False):
            torch.backends.cudnn.benchmark = True

        cfg.model.pretrained = None
        if cfg.model.get('neck'):
            if isinstance(cfg.model.neck, list):
                for neck_cfg in cfg.model.neck:
                    if neck_cfg.get('rfp_backbone'):
                        if neck_cfg.rfp_backbone.get('pretrained'):
                            neck_cfg.rfp_backbone.pretrained = None
            elif cfg.model.neck.get('rfp_backbone'):
                if cfg.model.neck.rfp_backbone.get('pretrained'):
                    cfg.model.neck.rfp_backbone.pretrained = None

        if args.gpu_ids is not None:
            cfg.gpu_ids = args.gpu_ids
        else:
            cfg.gpu_ids = range(1)

        if args.launcher == 'none':
            distributed = False
            if len(cfg.gpu_ids) > 1:
                warnings.warn(
                    f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '
                    f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '
                    'non-distribute testing time.')
                cfg.gpu_ids = cfg.gpu_ids[0:1]
        else:
            distributed = True
            init_dist(args.launcher, **cfg.dist_params)

        test_dataloader_default_args = dict(
            samples_per_gpu=1, workers_per_gpu=2, dist=distributed, shuffle=False)

        if isinstance(cfg.data.test, dict):
            cfg.data.test.test_mode = True
            if 'samples_per_gpu' in cfg.data.test:
                warnings.warn('`samples_per_gpu` in `test` field of '
                              'data will be deprecated, you should'
                              ' move it to `test_dataloader` field')
                test_dataloader_default_args['samples_per_gpu'] = \
                    cfg.data.test.pop('samples_per_gpu')
            if test_dataloader_default_args['samples_per_gpu'] > 1:
                # Replace 'ImageToTensor' to 'DefaultFormatBundle'
                cfg.data.test.pipeline = replace_ImageToTensor(
                    cfg.data.test.pipeline)
        elif isinstance(cfg.data.test, list):
            for ds_cfg in cfg.data.test:
                ds_cfg.test_mode = True
                if 'samples_per_gpu' in ds_cfg:
                    warnings.warn('`samples_per_gpu` in `test` field of '
                                  'data will be deprecated, you should'
                                  ' move it to `test_dataloader` field')
            samples_per_gpu = max(
                [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
            test_dataloader_default_args['samples_per_gpu'] = samples_per_gpu
            if samples_per_gpu > 1:
                for ds_cfg in cfg.data.test:
                    ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)

        test_loader_cfg = {
            **test_dataloader_default_args,
            **cfg.data.get('test_dataloader', {})
        }

        rank, _ = get_dist_info()
        if args.work_dir is not None and rank == 0:
            mmcv.mkdir_or_exist(osp.abspath(args.work_dir))
            timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
            json_file = osp.join(args.work_dir, f'eval_{timestamp}.json')

        dataset = build_dataset(cfg.data.test)
        data_loader = build_dataloader(dataset, **test_loader_cfg)

        cfg.model.train_cfg = None
        model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
        fp16_cfg = cfg.get('fp16', None)
        if fp16_cfg is not None:
            wrap_fp16_model(model)
        checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
        if args.fuse_conv_bn:
            model = fuse_conv_bn(model)
        if 'CLASSES' in checkpoint.get('meta', {}):
            model.CLASSES = checkpoint['meta']['CLASSES']
        else:
            model.CLASSES = dataset.CLASSES

        if not distributed:
            model = MMDataParallel(model, device_ids=cfg.gpu_ids)
            outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
                                      args.show_score_thr)
        else:
            model = MMDistributedDataParallel(
                model.cuda(),
                device_ids=[torch.cuda.current_device()],
                broadcast_buffers=False)
            outputs = multi_gpu_test(model, data_loader, args.tmpdir,
                                     args.gpu_collect)

        rank, _ = get_dist_info()
        if rank == 0:
            if args.out:
                print(f'\nwriting results to {args.out}')
                mmcv.dump(outputs, args.out)
            kwargs = {} if args.eval_options is None else args.eval_options
            if args.format_only:
                dataset.format_results(outputs, **kwargs)
            if args.eval:
                eval_kwargs = cfg.get('evaluation', {}).copy()
                for key in [
                        'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best',
                        'rule', 'dynamic_intervals'
                ]:
                    eval_kwargs.pop(key, None)
                eval_kwargs.update(dict(metric=args.eval, **kwargs))
                metric = dataset.evaluate(outputs, **eval_kwargs)
                print(metric)
                metric_dict = dict(config=args.config, metric=metric)
                if args.work_dir is not None and rank == 0:
                    mmcv.dump(metric_dict, json_file)


    if __name__ == '__main__':
        main()

Reproduces the problem - command or script

python ./tools/test.py --format-only --eval-options submission_dir=/home/xyn02/LSKNet/runs/202403/20240326/lsk_s_ema_fpn_1x_dota_le90_run/work_dir/Task1_results
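With `--format-only`, `dataset.format_results` writes per-class `Task1_*.txt` files into `submission_dir` rather than computing mAP locally; the DOTAv1 test-set score comes from uploading those files to the DOTA evaluation server. A minimal sketch of bundling them for upload (the filename pattern is the usual DOTA Task1 convention; paths are placeholders):

```python
import os
import zipfile


def zip_submission(submission_dir, out_zip):
    """Bundle the generated per-class Task1_*.txt files into one zip."""
    with zipfile.ZipFile(out_zip, 'w', zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(submission_dir)):
            if name.startswith('Task1_') and name.endswith('.txt'):
                zf.write(os.path.join(submission_dir, name), arcname=name)


# Toy demonstration in a temporary directory (stand-in for the real
# Task1_results directory used in the command above)
import tempfile
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, 'Task1_plane.txt'), 'w').close()
open(os.path.join(tmp, 'notes.log'), 'w').close()  # ignored by the filter
zip_submission(tmp, os.path.join(tmp, 'submission.zip'))
print(zipfile.ZipFile(os.path.join(tmp, 'submission.zip')).namelist())
# -> ['Task1_plane.txt']
```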

Reproduces the problem - error message

(screenshots of the error output attached)

Additional information

No response

zcablii commented 8 months ago

Try this checkpoint.

xxxyyynnn commented 8 months ago

I have used the "lsknet_s_ema_dota8185_epoch_12.pth" checkpoint, and the test mAP on DOTAv1 Task1 is exactly the same as before. (screenshot attached)

zcablii commented 8 months ago

Make sure you are conducting multi-scale testing.
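For reference, multi-scale testing in mmrotate is prepared by splitting the DOTA test images at several resize rates before inference, via `tools/data/dota/split/img_split.py` with a multi-scale split config. A sketch of such a config written as JSON (the `sizes`/`gaps`/`rates` values are the commonly used ones from mmrotate's `ms_test.json`; the paths are placeholders, so verify both against your own checkout):

```python
import json

# Assumed multi-scale split settings, modeled on mmrotate's
# tools/data/dota/split/split_configs/ms_test.json
ms_split = {
    'img_dirs': ['data/DOTA/test/images/'],  # placeholder dataset layout
    'sizes': [1024],                # patch size fed to the detector
    'gaps': [500],                  # overlap between adjacent patches
    'rates': [0.5, 1.0, 1.5],       # multi-scale; single-scale uses [1.0]
    'save_dir': 'data/split_ms_dota/test/',  # placeholder output dir
}

with open('ms_test_local.json', 'w') as f:
    json.dump(ms_split, f, indent=2)
```

A mAP gap of roughly half a point between single-scale and multi-scale testing is consistent with the 81.3% vs 81.85% difference reported above.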

xxxyyynnn commented 8 months ago

You're right, I was using single-scale testing. But when I tried to multi-scale split DOTAv1, I ran into an error:

(LSKNet) xyn02@server-R740:~/LSKNet$ python ./tools/test.py --format-only --eval-options submission_dir=/home/xyn02/LSKNet/runs/202403/20240327/lsk_s_ema_fpn_1x_dota_le90_run/work_dir_epoch_12/Task1_results
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
./tools/test.py:124: UserWarning: `mp_start_method` in `cfg` is set to `spawn` to use CUDA with multiprocessing when formatting output result.
/home/xyn02/LSKNet/mmrotate/utils/setup_env.py:38: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
/home/xyn02/LSKNet/mmrotate/utils/setup_env.py:48: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/site-packages/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: num_anchors is deprecated, for consistency or also use num_base_priors instead
load checkpoint from local path: /home/xyn02/LSKNet/configs/xynModel/lsknet_s_ema_dota8185_epoch_12.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: ema_backbone_patch_embed1_proj_weight, ema_backbone_patch_embed1_proj_bias, ema_backbone_patch_embed1_norm_weight, ema_backbone_patch_embed1_norm_bias, ema_backbone_patch_embed1_norm_running_mean, ema_backbone_patch_embed1_norm_running_var, ema_backbone_patch_embed1_norm_num_batches_tracked, ema_backbone_block1_0_layer_scale_1, ema_backbone_block1_0_layer_scale_2, ema_backbone_block1_0_norm1_weight, ema_backbone_block1_0_norm1_bias, ema_backbone_block1_0_norm1_running_mean, ema_backbone_block1_0_norm1_running_var, ema_backbone_block1_0_norm1_num_batches_tracked, ema_backbone_block1_0_norm2_weight, ema_backbone_block1_0_norm2_bias, ema_backbone_block1_0_norm2_running_mean, ema_backbone_block1_0_norm2_running_var, ema_backbone_block1_0_norm2_num_batches_tracked, ema_backbone_block1_0_attn_proj_1_weight, ema_backbone_block1_0_attn_proj_1_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv0_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv0_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv1_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv1_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv2_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv2_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block1_0_attn_spatial_gating_unit_conv_weight, ema_backbone_block1_0_attn_spatial_gating_unit_conv_bias, ema_backbone_block1_0_attn_proj_2_weight, ema_backbone_block1_0_attn_proj_2_bias, ema_backbone_block1_0_mlp_fc1_weight, ema_backbone_block1_0_mlp_fc1_bias, ema_backbone_block1_0_mlp_dwconv_dwconv_weight, ema_backbone_block1_0_mlp_dwconv_dwconv_bias, ema_backbone_block1_0_mlp_fc2_weight, ema_backbone_block1_0_mlp_fc2_bias, ema_backbone_block1_1_layer_scale_1, 
ema_backbone_block1_1_layer_scale_2, ema_backbone_block1_1_norm1_weight, ema_backbone_block1_1_norm1_bias, ema_backbone_block1_1_norm1_running_mean, ema_backbone_block1_1_norm1_running_var, ema_backbone_block1_1_norm1_num_batches_tracked, ema_backbone_block1_1_norm2_weight, ema_backbone_block1_1_norm2_bias, ema_backbone_block1_1_norm2_running_mean, ema_backbone_block1_1_norm2_running_var, ema_backbone_block1_1_norm2_num_batches_tracked, ema_backbone_block1_1_attn_proj_1_weight, ema_backbone_block1_1_attn_proj_1_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv0_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv0_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv1_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv1_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv2_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv2_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block1_1_attn_spatial_gating_unit_conv_weight, ema_backbone_block1_1_attn_spatial_gating_unit_conv_bias, ema_backbone_block1_1_attn_proj_2_weight, ema_backbone_block1_1_attn_proj_2_bias, ema_backbone_block1_1_mlp_fc1_weight, ema_backbone_block1_1_mlp_fc1_bias, ema_backbone_block1_1_mlp_dwconv_dwconv_weight, ema_backbone_block1_1_mlp_dwconv_dwconv_bias, ema_backbone_block1_1_mlp_fc2_weight, ema_backbone_block1_1_mlp_fc2_bias, ema_backbone_norm1_weight, ema_backbone_norm1_bias, ema_backbone_patch_embed2_proj_weight, ema_backbone_patch_embed2_proj_bias, ema_backbone_patch_embed2_norm_weight, ema_backbone_patch_embed2_norm_bias, ema_backbone_patch_embed2_norm_running_mean, ema_backbone_patch_embed2_norm_running_var, ema_backbone_patch_embed2_norm_num_batches_tracked, ema_backbone_block2_0_layer_scale_1, 
ema_backbone_block2_0_layer_scale_2, ema_backbone_block2_0_norm1_weight, ema_backbone_block2_0_norm1_bias, ema_backbone_block2_0_norm1_running_mean, ema_backbone_block2_0_norm1_running_var, ema_backbone_block2_0_norm1_num_batches_tracked, ema_backbone_block2_0_norm2_weight, ema_backbone_block2_0_norm2_bias, ema_backbone_block2_0_norm2_running_mean, ema_backbone_block2_0_norm2_running_var, ema_backbone_block2_0_norm2_num_batches_tracked, ema_backbone_block2_0_attn_proj_1_weight, ema_backbone_block2_0_attn_proj_1_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv0_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv0_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv1_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv1_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv2_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv2_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block2_0_attn_spatial_gating_unit_conv_weight, ema_backbone_block2_0_attn_spatial_gating_unit_conv_bias, ema_backbone_block2_0_attn_proj_2_weight, ema_backbone_block2_0_attn_proj_2_bias, ema_backbone_block2_0_mlp_fc1_weight, ema_backbone_block2_0_mlp_fc1_bias, ema_backbone_block2_0_mlp_dwconv_dwconv_weight, ema_backbone_block2_0_mlp_dwconv_dwconv_bias, ema_backbone_block2_0_mlp_fc2_weight, ema_backbone_block2_0_mlp_fc2_bias, ema_backbone_block2_1_layer_scale_1, ema_backbone_block2_1_layer_scale_2, ema_backbone_block2_1_norm1_weight, ema_backbone_block2_1_norm1_bias, ema_backbone_block2_1_norm1_running_mean, ema_backbone_block2_1_norm1_running_var, ema_backbone_block2_1_norm1_num_batches_tracked, ema_backbone_block2_1_norm2_weight, ema_backbone_block2_1_norm2_bias, ema_backbone_block2_1_norm2_running_mean, 
ema_backbone_block2_1_norm2_running_var, ema_backbone_block2_1_norm2_num_batches_tracked, ema_backbone_block2_1_attn_proj_1_weight, ema_backbone_block2_1_attn_proj_1_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv0_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv0_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv1_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv1_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv2_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv2_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block2_1_attn_spatial_gating_unit_conv_weight, ema_backbone_block2_1_attn_spatial_gating_unit_conv_bias, ema_backbone_block2_1_attn_proj_2_weight, ema_backbone_block2_1_attn_proj_2_bias, ema_backbone_block2_1_mlp_fc1_weight, ema_backbone_block2_1_mlp_fc1_bias, ema_backbone_block2_1_mlp_dwconv_dwconv_weight, ema_backbone_block2_1_mlp_dwconv_dwconv_bias, ema_backbone_block2_1_mlp_fc2_weight, ema_backbone_block2_1_mlp_fc2_bias, ema_backbone_norm2_weight, ema_backbone_norm2_bias, ema_backbone_patch_embed3_proj_weight, ema_backbone_patch_embed3_proj_bias, ema_backbone_patch_embed3_norm_weight, ema_backbone_patch_embed3_norm_bias, ema_backbone_patch_embed3_norm_running_mean, ema_backbone_patch_embed3_norm_running_var, ema_backbone_patch_embed3_norm_num_batches_tracked, ema_backbone_block3_0_layer_scale_1, ema_backbone_block3_0_layer_scale_2, ema_backbone_block3_0_norm1_weight, ema_backbone_block3_0_norm1_bias, ema_backbone_block3_0_norm1_running_mean, ema_backbone_block3_0_norm1_running_var, ema_backbone_block3_0_norm1_num_batches_tracked, ema_backbone_block3_0_norm2_weight, ema_backbone_block3_0_norm2_bias, ema_backbone_block3_0_norm2_running_mean, 
ema_backbone_block3_0_norm2_running_var, ema_backbone_block3_0_norm2_num_batches_tracked, ema_backbone_block3_0_attn_proj_1_weight, ema_backbone_block3_0_attn_proj_1_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv0_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv0_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv1_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv1_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv2_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv2_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block3_0_attn_spatial_gating_unit_conv_weight, ema_backbone_block3_0_attn_spatial_gating_unit_conv_bias, ema_backbone_block3_0_attn_proj_2_weight, ema_backbone_block3_0_attn_proj_2_bias, ema_backbone_block3_0_mlp_fc1_weight, ema_backbone_block3_0_mlp_fc1_bias, ema_backbone_block3_0_mlp_dwconv_dwconv_weight, ema_backbone_block3_0_mlp_dwconv_dwconv_bias, ema_backbone_block3_0_mlp_fc2_weight, ema_backbone_block3_0_mlp_fc2_bias, ema_backbone_block3_1_layer_scale_1, ema_backbone_block3_1_layer_scale_2, ema_backbone_block3_1_norm1_weight, ema_backbone_block3_1_norm1_bias, ema_backbone_block3_1_norm1_running_mean, ema_backbone_block3_1_norm1_running_var, ema_backbone_block3_1_norm1_num_batches_tracked, ema_backbone_block3_1_norm2_weight, ema_backbone_block3_1_norm2_bias, ema_backbone_block3_1_norm2_running_mean, ema_backbone_block3_1_norm2_running_var, ema_backbone_block3_1_norm2_num_batches_tracked, ema_backbone_block3_1_attn_proj_1_weight, ema_backbone_block3_1_attn_proj_1_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv0_weight, ema_backbone_block3_1_attn_spatial_gating_unit_conv0_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv_spatial_weight, 
ema_backbone_block3_1_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv1_weight, ema_backbone_block3_1_attn_spatial_gating_unit_conv1_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv2_weight, ema_backbone_block3_1_attn_spatial_gating_unit_conv2_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block3_1_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block3_1_attn_spatial_gating_unit_conv_weight, ema_backbone_block3_1_attn_spatial_gating_unit_conv_bias, ema_backbone_block3_1_attn_proj_2_weight, ema_backbone_block3_1_attn_proj_2_bias, ema_backbone_block3_1_mlp_fc1_weight, ema_backbone_block3_1_mlp_fc1_bias, ema_backbone_block3_1_mlp_dwconv_dwconv_weight, ema_backbone_block3_1_mlp_dwconv_dwconv_bias, ema_backbone_block3_1_mlp_fc2_weight, ema_backbone_block3_1_mlp_fc2_bias, ema_backbone_block3_2_layer_scale_1, ema_backbone_block3_2_layer_scale_2, ema_backbone_block3_2_norm1_weight, ema_backbone_block3_2_norm1_bias, ema_backbone_block3_2_norm1_running_mean, ema_backbone_block3_2_norm1_running_var, ema_backbone_block3_2_norm1_num_batches_tracked, ema_backbone_block3_2_norm2_weight, ema_backbone_block3_2_norm2_bias, ema_backbone_block3_2_norm2_running_mean, ema_backbone_block3_2_norm2_running_var, ema_backbone_block3_2_norm2_num_batches_tracked, ema_backbone_block3_2_attn_proj_1_weight, ema_backbone_block3_2_attn_proj_1_bias, ema_backbone_block3_2_attn_spatial_gating_unit_conv0_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv0_bias, ema_backbone_block3_2_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block3_2_attn_spatial_gating_unit_conv1_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv1_bias, ema_backbone_block3_2_attn_spatial_gating_unit_conv2_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv2_bias, 
ema_backbone_block3_2_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block3_2_attn_spatial_gating_unit_conv_weight, ema_backbone_block3_2_attn_spatial_gating_unit_conv_bias, ema_backbone_block3_2_attn_proj_2_weight, ema_backbone_block3_2_attn_proj_2_bias, ema_backbone_block3_2_mlp_fc1_weight, ema_backbone_block3_2_mlp_fc1_bias, ema_backbone_block3_2_mlp_dwconv_dwconv_weight, ema_backbone_block3_2_mlp_dwconv_dwconv_bias, ema_backbone_block3_2_mlp_fc2_weight, ema_backbone_block3_2_mlp_fc2_bias, ema_backbone_block3_3_layer_scale_1, ema_backbone_block3_3_layer_scale_2, ema_backbone_block3_3_norm1_weight, ema_backbone_block3_3_norm1_bias, ema_backbone_block3_3_norm1_running_mean, ema_backbone_block3_3_norm1_running_var, ema_backbone_block3_3_norm1_num_batches_tracked, ema_backbone_block3_3_norm2_weight, ema_backbone_block3_3_norm2_bias, ema_backbone_block3_3_norm2_running_mean, ema_backbone_block3_3_norm2_running_var, ema_backbone_block3_3_norm2_num_batches_tracked, ema_backbone_block3_3_attn_proj_1_weight, ema_backbone_block3_3_attn_proj_1_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv0_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv0_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv1_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv1_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv2_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv2_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block3_3_attn_spatial_gating_unit_conv_weight, ema_backbone_block3_3_attn_spatial_gating_unit_conv_bias, ema_backbone_block3_3_attn_proj_2_weight, ema_backbone_block3_3_attn_proj_2_bias, 
ema_backbone_block3_3_mlp_fc1_weight, ema_backbone_block3_3_mlp_fc1_bias, ema_backbone_block3_3_mlp_dwconv_dwconv_weight, ema_backbone_block3_3_mlp_dwconv_dwconv_bias, ema_backbone_block3_3_mlp_fc2_weight, ema_backbone_block3_3_mlp_fc2_bias, ema_backbone_norm3_weight, ema_backbone_norm3_bias, ema_backbone_patch_embed4_proj_weight, ema_backbone_patch_embed4_proj_bias, ema_backbone_patch_embed4_norm_weight, ema_backbone_patch_embed4_norm_bias, ema_backbone_patch_embed4_norm_running_mean, ema_backbone_patch_embed4_norm_running_var, ema_backbone_patch_embed4_norm_num_batches_tracked, ema_backbone_block4_0_layer_scale_1, ema_backbone_block4_0_layer_scale_2, ema_backbone_block4_0_norm1_weight, ema_backbone_block4_0_norm1_bias, ema_backbone_block4_0_norm1_running_mean, ema_backbone_block4_0_norm1_running_var, ema_backbone_block4_0_norm1_num_batches_tracked, ema_backbone_block4_0_norm2_weight, ema_backbone_block4_0_norm2_bias, ema_backbone_block4_0_norm2_running_mean, ema_backbone_block4_0_norm2_running_var, ema_backbone_block4_0_norm2_num_batches_tracked, ema_backbone_block4_0_attn_proj_1_weight, ema_backbone_block4_0_attn_proj_1_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv0_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv0_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv1_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv1_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv2_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv2_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block4_0_attn_spatial_gating_unit_conv_weight, ema_backbone_block4_0_attn_spatial_gating_unit_conv_bias, ema_backbone_block4_0_attn_proj_2_weight, ema_backbone_block4_0_attn_proj_2_bias, 
ema_backbone_block4_0_mlp_fc1_weight, ema_backbone_block4_0_mlp_fc1_bias, ema_backbone_block4_0_mlp_dwconv_dwconv_weight, ema_backbone_block4_0_mlp_dwconv_dwconv_bias, ema_backbone_block4_0_mlp_fc2_weight, ema_backbone_block4_0_mlp_fc2_bias, ema_backbone_block4_1_layer_scale_1, ema_backbone_block4_1_layer_scale_2, ema_backbone_block4_1_norm1_weight, ema_backbone_block4_1_norm1_bias, ema_backbone_block4_1_norm1_running_mean, ema_backbone_block4_1_norm1_running_var, ema_backbone_block4_1_norm1_num_batches_tracked, ema_backbone_block4_1_norm2_weight, ema_backbone_block4_1_norm2_bias, ema_backbone_block4_1_norm2_running_mean, ema_backbone_block4_1_norm2_running_var, ema_backbone_block4_1_norm2_num_batches_tracked, ema_backbone_block4_1_attn_proj_1_weight, ema_backbone_block4_1_attn_proj_1_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv0_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv0_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv_spatial_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv_spatial_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv1_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv1_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv2_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv2_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv_squeeze_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv_squeeze_bias, ema_backbone_block4_1_attn_spatial_gating_unit_conv_weight, ema_backbone_block4_1_attn_spatial_gating_unit_conv_bias, ema_backbone_block4_1_attn_proj_2_weight, ema_backbone_block4_1_attn_proj_2_bias, ema_backbone_block4_1_mlp_fc1_weight, ema_backbone_block4_1_mlp_fc1_bias, ema_backbone_block4_1_mlp_dwconv_dwconv_weight, ema_backbone_block4_1_mlp_dwconv_dwconv_bias, ema_backbone_block4_1_mlp_fc2_weight, ema_backbone_block4_1_mlp_fc2_bias, ema_backbone_norm4_weight, ema_backbone_norm4_bias, ema_neck_lateral_convs_0_conv_weight, ema_neck_lateral_convs_0_conv_bias, 
ema_neck_lateral_convs_1_conv_weight, ema_neck_lateral_convs_1_conv_bias, ema_neck_lateral_convs_2_conv_weight, ema_neck_lateral_convs_2_conv_bias, ema_neck_lateral_convs_3_conv_weight, ema_neck_lateral_convs_3_conv_bias, ema_neck_fpn_convs_0_conv_weight, ema_neck_fpn_convs_0_conv_bias, ema_neck_fpn_convs_1_conv_weight, ema_neck_fpn_convs_1_conv_bias, ema_neck_fpn_convs_2_conv_weight, ema_neck_fpn_convs_2_conv_bias, ema_neck_fpn_convs_3_conv_weight, ema_neck_fpn_convs_3_conv_bias, ema_rpn_head_rpn_conv_weight, ema_rpn_head_rpn_conv_bias, ema_rpn_head_rpn_cls_weight, ema_rpn_head_rpn_cls_bias, ema_rpn_head_rpn_reg_weight, ema_rpn_head_rpn_reg_bias, ema_roi_head_bbox_head_fc_cls_weight, ema_roi_head_bbox_head_fc_cls_bias, ema_roi_head_bbox_head_fc_reg_weight, ema_roi_head_bbox_head_fc_reg_bias, ema_roi_head_bbox_head_shared_fcs_0_weight, ema_roi_head_bbox_head_shared_fcs_0_bias, ema_roi_head_bbox_head_shared_fcs_1_weight, ema_roi_head_bbox_head_shared_fcs_1_bias

[                                                  ] 0/71888, elapsed: 0s, ETA:
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/site-packages/mmdet/models/dense_heads/anchor_head.py:123: UserWarning: DeprecationWarning: anchor_generator is deprecated, please use "prior_generator" instead
  warnings.warn('DeprecationWarning: anchor_generator is deprecated, '
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 71888/71888, 1.8 task/s, elapsed: 40306s, ETA: 0s
Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
  File "/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/synchronize.py", line 87, in _cleanup
    sem_unlink(name)
FileNotFoundError: [Errno 2] No such file or directory
[the same "Exception ignored in: <Finalize object, dead>" traceback is repeated many more times]

Merging patch bboxes into full image!!!
Multiple processing
[                                                  ] 0/937, elapsed: 0s, ETA:
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
[the same MMCV warning is repeated once per worker process]
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 937/937, 50.0 task/s, elapsed: 19s, ETA: 0s
Used time: 32.2 s
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 14 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/mp-wn00xpu2': [Errno 2] No such file or directory
  warnings.warn('resource_tracker: %r: %s' % (name, e))
[the same resource_tracker warning is repeated for each of the 14 leaked semaphores]

Could you please help me solve this problem? Thanks a lot!

zcablii commented 8 months ago

I'm sorry, I didn't understand your question. Did the problem occur when splitting the data into multi-scale patches, or during testing? Could you please provide a summary of the problem and the errors you encountered? From the large amount of terminal output you have copied and pasted here, it is difficult to determine exactly where the problem lies.

xxxyyynnn commented 8 months ago

Sorry for the confusion. I have split DOTAv1 into multi-scale patches, and I want to test using the "lsknet_s_ema_dota8185_epoch_12.pth" checkpoint. Then I ran the following command:
###########################################
python ./tools/test.py --format-only --eval-options submission_dir=/home/xyn02/LSKNet/runs/202403/20240327/lsk_s_ema_fpn_1x_dota_le90_run/work_dir_epoch_12/Task1_results
###########################################

The test ran for more than 13 hours, and I got the following error after testing:
###########################################
Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
  File "/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/home/xyn02/anaconda3/envs/LSKNet/lib/python3.8/multiprocessing/synchronize.py", line 87, in _cleanup
    sem_unlink(name)
FileNotFoundError: [Errno 2] No such file or directory
Exception ignored in: <Finalize object, dead>
[the same traceback is repeated several more times]
###########################################

xxxyyynnn commented 8 months ago

However, I can still get the Task1_results output, so I can't tell whether the results are correct or not.

zcablii commented 8 months ago

The testing command should be:
python ./tools/test.py configs/SOME_Config.py checkpoints/SOME_CHECKPOINT.pth --format-only --eval-options submission_dir=...
It seems that you forgot to pass the config and checkpoint paths.

Also, do not forget to change the test data path to the multi-scale test set in the config file.
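In mmrotate 0.x configs the test split lives under `data.test`. A hedged sketch of pointing it at a multi-scale-split DOTA test set follows; the `data_root` path is a placeholder for wherever your own splitting tool wrote the ms test patches, and the field values mirror the usual DOTA config layout rather than any one exact file:

```python
# Sketch of the relevant mmrotate-0.x config fragment. The data_root path is a
# placeholder; point it at the output directory of your multi-scale split.
data_root = '/path/to/split_ms_dota1.0/'
data = dict(
    test=dict(
        type='DOTADataset',
        ann_file=data_root + 'test/images/',   # DOTA test split ships without labels
        img_prefix=data_root + 'test/images/',
        version='le90'))
```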

xxxyyynnn commented 8 months ago

I just hardcoded these paths as the argument defaults in test.py directly. [screenshot]

zcablii commented 8 months ago

Which platform are you using? macOS?

xxxyyynnn commented 8 months ago

I use Linux.

zcablii commented 8 months ago

I'm afraid that is an issue with your environment or device. I can find some potential solutions on Google, but I'm not sure which one best matches your situation; I suggest you investigate it yourself.

xxxyyynnn commented 8 months ago

Thank you for your advice! I'm trying to figure it out. Also, could you please explain how the checkpoint "lsknet_s_ema_dota8185_epoch_12.pth" was trained, i.e. the parameter settings and anything else that needs attention? I've trained the original config "lsk_s_ema_fpn_1x_dota_le90.py" for more than 36 hours (110 epochs) on an RTX 3090, but the mAP during training is still rising and is only a little above 66.5%.

zcablii commented 8 months ago

The default settings are already included in the lsk_s_ema_fpn_1x_dota_le90.py config. We use 36 epochs for EMA training and never tried other settings. More training epochs may bring continued performance improvement, but they also carry the risk of overfitting.
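For context, EMA training keeps a shadow copy of the weights that is updated every iteration as an exponential moving average of the live model weights, and that shadow copy is what gets evaluated. A minimal sketch of the update rule follows; the momentum value here is illustrative, not necessarily the one used by the hook in this config:

```python
# Sketch of exponential-moving-average (EMA) weight tracking. The shadow
# weights move a small step toward the live model weights each iteration.
def ema_update(ema_weights, model_weights, momentum=0.0002):
    """In-place EMA step: ema = (1 - momentum) * ema + momentum * model."""
    for name, w in model_weights.items():
        ema_weights[name] = (1.0 - momentum) * ema_weights[name] + momentum * w
    return ema_weights

ema = {'w': 0.0}
for _ in range(3):  # three steps toward a fixed model weight of 1.0
    ema_update(ema, {'w': 1.0}, momentum=0.5)
print(ema['w'])     # → 0.875, converging toward the model weight
```

With a small momentum the EMA weights change slowly, which smooths out the noise of individual optimizer steps; that is why the EMA checkpoint can outperform the raw final weights.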

xxxyyynnn commented 8 months ago

Thank you for the reminder! Thanks a lot!