wanghao9610 / OV-DINO

Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion
https://wanghao9610.github.io/OV-DINO
Apache License 2.0

Error when evaluating the model after fine-tuning on a custom dataset #8

Closed menghan1124 closed 3 months ago

menghan1124 commented 3 months ago

Command: sh scripts/eval.sh projects/ovdino/configs/ovdino_swin_tiny224_bert_base_ft_coco_24ep.py ../wkdrs/ovdino_swin_tiny224_bert_base_ft_custom_24ep/model_0034999.pth evaldirs/

Error message:

label_enc.weight
[08/05 07:12:42 detectron2]: Run evaluation under eval-only mode
[08/05 07:12:42 detectron2]: Run evaluation without EMA.
W0805 07:12:44.368392 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616430 via signal SIGTERM
W0805 07:12:44.369699 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616431 via signal SIGTERM
W0805 07:12:44.370183 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616432 via signal SIGTERM
W0805 07:12:44.370798 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616433 via signal SIGTERM
W0805 07:12:44.371243 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616434 via signal SIGTERM
W0805 07:12:44.371743 140065675740992 torch/multiprocessing/spawn.py:145] Terminating process 1616435 via signal SIGTERM
Traceback (most recent call last):
  File "/workspace/OV-DINO/ovdino/./tools/train_net.py", line 331, in <module>
    launch(
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/engine/launch.py", line 67, in launch
    mp.spawn(
  File "/root/miniconda3/envs/ov-dino/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 281, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
  File "/root/miniconda3/envs/ov-dino/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 237, in start_processes
    while not context.join():
  File "/root/miniconda3/envs/ov-dino/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 188, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 6 terminated with the following error:
Traceback (most recent call last):
  File "/root/miniconda3/envs/ov-dino/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 75, in _wrap
    fn(i, *args)
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/engine/launch.py", line 126, in _distributed_worker
    main_func(*args)
  File "/workspace/OV-DINO/ovdino/tools/train_net.py", line 320, in main
    print(do_test(cfg, model, eval_only=True))
  File "/workspace/OV-DINO/ovdino/tools/train_net.py", line 165, in do_test
    instantiate(cfg.dataloader.test),
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/config/instantiate.py", line 67, in instantiate
    cfg = {k: instantiate(v) for k, v in cfg.items()}
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/config/instantiate.py", line 67, in <dictcomp>
    cfg = {k: instantiate(v) for k, v in cfg.items()}
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/config/instantiate.py", line 83, in instantiate
    return cls(**cfg)
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/data/build.py", line 241, in get_detection_dataset_dicts
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names]
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/data/build.py", line 241, in <listcomp>
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names]
  File "/workspace/OV-DINO/ovdino/detectron2-717ab9/detectron2/data/catalog.py", line 58, in get
    return f()
  File "/workspace/OV-DINO/ovdino/detrex/data/datasets/coco_ovd.py", line 321, in <lambda>
    lambda: load_coco_json(
  File "/workspace/OV-DINO/ovdino/detrex/data/datasets/coco_ovd.py", line 78, in load_coco_json
    coco_api = COCO(json_file)
  File "/root/miniconda3/envs/ov-dino/lib/python3.9/site-packages/pycocotools/coco.py", line 81, in __init__
    with open(annotation_file, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/OV-DINO/datas/coco/annotations/instances_val2017.json'

Training on the custom data works fine, but evaluating the sample data with the best fine-tuned checkpoint raises this error. Which part of the data config do I need to change?

The COCO-format custom dataset info has already been added in custom_ovd.py.
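For context on why the missing file is reported only at evaluation time: detectron2 registers datasets lazily, so the annotation file is opened only when DatasetCatalog.get() actually runs. A minimal sketch of that mechanism (simplified, not the real detectron2 code):

```python
# Simplified sketch of detectron2-style lazy dataset registration (not the
# actual detectron2 implementation): register() only stores a loader
# function, so a bad annotation path raises FileNotFoundError at get()
# time, i.e. when training/evaluation first touches the dataset.
import json
import os
import tempfile

_CATALOG = {}


def register(name, json_file):
    # Store a zero-argument loader; nothing is read from disk yet.
    _CATALOG[name] = lambda: json.load(open(json_file))


def get(name):
    # File I/O happens here, mirroring DatasetCatalog.get().
    return _CATALOG[name]()


tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "train.json")
with open(good, "w") as f:
    json.dump({"images": []}, f)

register("custom_train1", good)
register("coco_val", os.path.join(tmp, "instances_val2017.json"))  # missing

print(get("custom_train1"))  # loads fine
try:
    get("coco_val")
except FileNotFoundError:
    print("FileNotFoundError raised only when get() is called")
```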

wanghao9610 commented 3 months ago

If you defined your dataset following custom_ovd.py: first check which test dataset you defined in that file (custom_ovd.py) — judging from the traceback, it seems you did not change it? If you defined your dataset following coco_ovd.py: you need to modify the corresponding parameters in register_custom_ovd.py (it is a dict; just change the corresponding values).

menghan1124 commented 3 months ago

My mistake, I wrote it wrong — the dataset is defined following custom_ovd.py, and I have already modified it there. The custom_ovd.py code is below. For dataloader.evaluator I have also tried both the built-in code and my own definition.

import itertools

import detectron2.data.transforms as T
from detectron2.config import LazyCall as L
from detectron2.data import ( 
    build_detection_test_loader,
    build_detection_train_loader,
    build_detection_val_loader,
    get_detection_dataset_dicts,
)
from detectron2.evaluation import COCOEvaluator
from detrex.data import DetrDatasetMapper
from detrex.data.datasets import register_coco_ovd_instances
from omegaconf import OmegaConf

dataloader = OmegaConf.create()

# if you follow the coco format, you can use the following code.
# if you want to define it by yourself, you can change it on ovdino/detrex/data/datasets/custom_ovd.py.
register_coco_ovd_instances(
    "custom_train1",  # dataset_name
    {},  # custom_data_info
    "/workspace/OV-DINO/datas/custom/annotations/train.json",  # annotations_jsonfile
    "/workspace/OV-DINO/datas/custom/train",  # image_root
    27,  # number_of_classes, default: 80
    "full",  # template, default: full
)
register_coco_ovd_instances(
    "custom_val1",
    {},
    "/workspace/OV-DINO/datas/custom/annotations/val.json",
    "/workspace/OV-DINO/datas/custom/val",
    27,
    "full",
)
register_coco_ovd_instances(
    "custom_test1",
    {},
    "/workspace/OV-DINO/datas/custom/annotations/test.json",
    "/workspace/OV-DINO/datas/custom/test",
    27,
    "identity",
)

dataloader.train = L(build_detection_train_loader)(
    dataset=L(get_detection_dataset_dicts)(names="custom_train1"),
    mapper=L(DetrDatasetMapper)(
        augmentation=[
            L(T.RandomFlip)(),
            L(T.ResizeShortestEdge)(
                short_edge_length=(
                    480,
                    512,
                    544,
                    576,
                    608,
                    640,
                    672,
                    704,
                    736,
                    768,
                    800,
                ),
                max_size=1333,
                sample_style="choice",
            ),
        ],
        augmentation_with_crop=[
            L(T.RandomFlip)(),
            L(T.ResizeShortestEdge)(
                short_edge_length=(400, 500, 600),
                sample_style="choice",
            ),
            L(T.RandomCrop)(
                crop_type="absolute_range",
                crop_size=(384, 600),
            ),
            L(T.ResizeShortestEdge)(
                short_edge_length=(
                    480,
                    512,
                    544,
                    576,
                    608,
                    640,
                    672,
                    704,
                    736,
                    768,
                    800,
                ),
                max_size=1333,
                sample_style="choice",
            ),
        ],
        is_train=True,
        mask_on=False,
        img_format="RGB",
    ),
    total_batch_size=16,
    num_workers=4,
)

dataloader.test = L(build_detection_test_loader)(
    dataset=L(get_detection_dataset_dicts)(
        names="custom_test1", filter_empty=False
    ),
    mapper=L(DetrDatasetMapper)(
        augmentation=[
            L(T.ResizeShortestEdge)(
                short_edge_length=800,
                max_size=1333,
            ),
        ],
        augmentation_with_crop=None,
        is_train=False,
        mask_on=False,
        img_format="RGB",
    ),
    num_workers=4,
)

# dataloader.evaluator = L(COCOEvaluator)(
#     dataset_name="${..test.dataset.names}",
# )
dataloader.evaluator = L(build_detection_val_loader)(
    dataset=L(get_detection_dataset_dicts)(
        names="custom_val1", filter_empty=False
    ),
    mapper=L(DetrDatasetMapper)(
        augmentation=[
            L(T.ResizeShortestEdge)(
                short_edge_length=800,
                max_size=1333,
            ),
        ],
        augmentation_with_crop=None,
        is_train=False,
        mask_on=False,
        img_format="RGB",
    ),
    num_workers=4,
)

wanghao9610 commented 3 months ago

It looks fine to me. You could debug and check whether the arguments passed at /workspace/OV-DINO/ovdino/detrex/data/datasets/coco_ovd.py, line 321 are correct.

menghan1124 commented 3 months ago

It looks fine to me. You could debug and check whether the arguments passed at /workspace/OV-DINO/ovdino/detrex/data/datasets/coco_ovd.py, line 321 are correct.

Right now fine-tuning on this data works fine, and the Evaluate step during fine-tuning is also fine; the error only appears when running the scripts/eval.sh script with the fine-tuned model.

wanghao9610 commented 3 months ago

The config file is wrong; you should use this one:

sh scripts/eval.sh projects/ovdino/configs/ovdino_swin_tiny224_bert_base_ft_custom_24ep.py ../wkdrs/ovdino_swin_tiny224_bert_base_ft_custom_24ep/model_0034999.pth evaldirs
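The config file, not the checkpoint, determines which test annotations are opened at eval time; a toy illustration of that point (mapping and helper are hypothetical, names simplified from this thread, not actual OV-DINO code):

```python
# Toy illustration (hypothetical, not OV-DINO code): each LazyConfig builds
# its own dataloader, so the annotation file opened during evaluation is
# fixed by the config file passed to eval.sh, regardless of which
# checkpoint is loaded.
CONFIG_TEST_ANNOTATIONS = {
    "ovdino_swin_tiny224_bert_base_ft_coco_24ep.py":
        "datas/coco/annotations/instances_val2017.json",
    "ovdino_swin_tiny224_bert_base_ft_custom_24ep.py":
        "datas/custom/annotations/test.json",
}


def eval_annotations(config_file: str, checkpoint: str) -> str:
    # The checkpoint argument plays no role in dataset resolution.
    return CONFIG_TEST_ANNOTATIONS[config_file]


# Passing the coco config points evaluation at the (absent) COCO val set,
# no matter that the checkpoint was fine-tuned on the custom data:
print(eval_annotations(
    "ovdino_swin_tiny224_bert_base_ft_coco_24ep.py",
    "model_0034999.pth",
))
```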

menghan1124 commented 3 months ago

sh scripts/eval.sh projects/ovdino/configs/ovdino_swin_tiny224_bert_base_ft_coco_24ep.py ../wkdrs/ovdino_swin_tiny224_bert_base_ft_custom_24ep/model_0034999.pth evaldirs/

The config file is wrong; you should use this one:

sh scripts/eval.sh projects/ovdino/configs/ovdino_swin_tiny224_bert_base_ft_custom_24ep.py ../wkdrs/ovdino_swin_tiny224_bert_base_ft_custom_24ep/model_0034999.pth evaldirs

Thanks, it runs now.

otakue commented 3 months ago

Does fine-tuning cause a forgetting problem?

wanghao9610 commented 3 months ago

@otakue Fine-tuning improves performance on the corresponding dataset, but the generalization (zero-shot) performance drops accordingly.