open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

KeyError: 'mask' --- gt_mask = instance['mask'] #10961

Open zeynepiskndr opened 1 year ago

zeynepiskndr commented 1 year ago

Checklist

  1. I have searched for related issues, including [1024](https://github.com/open-mmlab/mmdetection/issues/1024), but could not get the expected help.
  2. I have read the FAQ documentation but could not get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug

KeyError: Caught KeyError in DataLoader worker process 0.
  File "/home/jupyter/newspaper_train/mmdetection/mmdet/datasets/transforms/loading.py", line 348, in _process_masks
    gt_mask = instance['mask']
KeyError: 'mask'

Reproduction

  1. What command or script did you run?
from mmengine.runner import Runner

# build the runner from config
runner = Runner.from_cfg(cfg)
# start training
runner.train()
  2. Did you make any modifications on the code or config? Did you understand what you have modified? I haven't made any changes to the code, but I have made some changes to the configuration: the class information for the dataset and the dataset paths.
  3. What dataset did you use? A completely personal dataset in COCO format.

Environment

  1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.

/opt/conda/lib/python3.10/site-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.4) or chardet (None)/charset_normalizer (3.2.0) doesn't match a supported version! warnings.warn(
sys.platform: linux
Python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
CUDA available: False
numpy_random_seed: 2147483648
GCC: gcc (Debian 10.2.1-6) 10.2.1 20210110
PyTorch: 1.13.1+cu117
PyTorch compiling details: PyTorch built with:
    • GCC 9.3
    • C++ Version: 201402
    • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • LAPACK is enabled (usually provided by MKL)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.14.1+cu117
OpenCV: 4.8.0
MMEngine: 0.8.4
MMDetection: 3.1.0+f78af77

  2. You may add any additional information that may be helpful for locating the problem, such as:
    • I am using a Google Cloud Vertex AI server, and PyTorch 1.13 is used

Error traceback If applicable, paste the error traceback here.


KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 408, in __getitem__
    data = self.prepare_data(idx)
  File "/opt/conda/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 790, in prepare_data
    return self.pipeline(data_info)
  File "/opt/conda/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 58, in __call__
    data = t(data)
  File "/opt/conda/lib/python3.10/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/home/jupyter/newspaper_train/mmdetection/mmdet/datasets/transforms/loading.py", line 447, in transform
    self._load_masks(results)
  File "/home/jupyter/newspaper_train/mmdetection/mmdet/datasets/transforms/loading.py", line 386, in _load_masks
    gt_masks = self._process_masks(results)
  File "/home/jupyter/newspaper_train/mmdetection/mmdet/datasets/transforms/loading.py", line 348, in _process_masks
    gt_mask = instance['mask']
KeyError: 'mask'
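For context, the traceback shows that the mask-loading step expects every parsed instance to carry a 'mask' entry, which CocoDataset only fills in when the annotation's "segmentation" field is present and non-empty. A minimal sketch of the kind of COCO annotation entry that avoids this failure (all values are illustrative, not from the reporter's dataset):

```python
import json

# Illustrative COCO-style annotation entry: the non-empty "segmentation"
# polygon is what prevents the KeyError: 'mask' during mask loading.
annotation = {
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    "bbox": [10.0, 20.0, 30.0, 40.0],  # [x, y, width, height]
    "area": 1200.0,
    "iscrowd": 0,
    # One polygon tracing the box corners: [x1, y1, x2, y2, ...]
    "segmentation": [[10.0, 20.0, 40.0, 20.0, 40.0, 60.0, 10.0, 60.0]],
}

# An empty "segmentation" list is what leads to the missing-'mask' failure.
assert annotation["segmentation"], "segmentation must be non-empty"
print(json.dumps(annotation["segmentation"]))
```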
Fhj-id commented 1 year ago

Have you fixed it?

zeynepiskndr commented 1 year ago

Unfortunately no!

zeynepiskndr commented 1 year ago

problem solved

Fhj-id commented 1 year ago

problem solved

how to solve it?

zeynepiskndr commented 1 year ago

I noticed that the format of my annotation file was not appropriate: the segmentation fields were empty. I ran some code to fill them in, and this script worked for me.

import json

def convert_bbox_to_polygon(bbox):
    # A COCO bbox is [x, y, width, height]; trace the four corners
    # to build a rectangular polygon [x1, y1, x2, y2, ...].
    x, y, w, h = bbox
    polygon = [x, y, x + w, y, x + w, y + h, x, y + h]
    return [polygon]

def main():
    file_path = "annotation_file_name.json"
    with open(file_path) as f:
        data = json.load(f)
    # Replace every (empty) "segmentation" field with the box polygon.
    for ann in data["annotations"]:
        ann["segmentation"] = convert_bbox_to_polygon(ann["bbox"])
    with open("annotation_file_name_edit.json", "w") as f:
        json.dump(data, f)
    print('DONE')

main()
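As a quick sanity check after converting a file this way, one can verify that no annotation is left with a missing or empty segmentation. A small sketch (the in-memory payload here is illustrative; point the helper at the converted file's parsed JSON in practice):

```python
import json

def missing_segmentation_ids(data):
    """Return the ids of annotations whose 'segmentation' is absent or empty."""
    return [ann["id"] for ann in data["annotations"]
            if not ann.get("segmentation")]

# Illustrative check against a tiny in-memory COCO-style payload;
# after conversion, the real annotation file should yield [].
coco = {"annotations": [
    {"id": 1, "bbox": [0, 0, 10, 10],
     "segmentation": [[0, 0, 10, 0, 10, 10, 0, 10]]},
    {"id": 2, "bbox": [5, 5, 4, 4], "segmentation": []},
]}
print(missing_segmentation_ids(coco))  # → [2]
```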
Fhj-id commented 1 year ago


Thanks!

wang1528186571 commented 1 year ago

I get the same error with Mask R-CNN. I am using a VOC-style dataset. Can you provide a solution?

MaksymAndreiev commented 1 month ago

Thanks a lot!