PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

COCO dataset in RLE format, split by PaddleX, raises "not found any coco record" when training with PaddleDetection #7195

Closed kaixin-bai closed 1 year ago

kaixin-bai commented 1 year ago

Search before asking

Bug Component

Training, DataProcess

Describe the Bug

Splitting the dataset with PaddleX:

paddlex --split_dataset --format COCO --dataset_dir ./texture_dataset/ --val_value 0.2 --test_value 0.1

The printed output:

2022-10-26 11:38:46,751-WARNING: type object 'QuantizationTransformPass' has no attribute '_supported_quantizable_op_type'
2022-10-26 11:38:46,751-WARNING: If you want to use training-aware and post-training quantization, please use Paddle >= 1.8.4 or develop version
2022-10-26 11:38:48 [INFO]  Dataset split starts...
loading annotations into memory...
Done (t=6.63s)
creating index...
index created!
2022-10-26 11:38:55 [INFO]  Dataset split done.
2022-10-26 11:38:55 [INFO]  Train samples: 700
2022-10-26 11:38:55 [INFO]  Eval samples: 200
2022-10-26 11:38:55 [INFO]  Test samples: 100
2022-10-26 11:38:55 [INFO]  Split files saved in ./texture_dataset/

The instance segmentation in the dataset is RLE-encoded. Visualizing it with the COCO tooling (image below) shows no errors in the annotation file: (image)

Training an object detection task with PaddleDetection fails:

python3 tools/train.py -c ./configs/yolov3/yolov3_darknet53_270e_obj1texture_coco.yml --use_vdl=true --vdl_log_dir=vdl_dir/scalar

loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Traceback (most recent call last):
  File "tools/train.py", line 172, in <module>
    main()
  File "tools/train.py", line 168, in main
    run(FLAGS, cfg)
  File "tools/train.py", line 123, in run
    trainer = Trainer(cfg, mode='train')
  File "/data-r10/kb/Projects/SynDataGen/PaddleDetection/ppdet/engine/trainer.py", line 95, in __init__
    self.dataset, cfg.worker_num)
  File "/data-r10/kb/Projects/SynDataGen/PaddleDetection/ppdet/data/reader.py", line 163, in __call__
    self.dataset.parse_dataset()
  File "/data-r10/kb/Projects/SynDataGen/PaddleDetection/ppdet/data/source/coco.py", line 225, in parse_dataset
    assert ct > 0, 'not found any coco record in %s' % (anno_path)
AssertionError: not found any coco record in ../datasets/obj1/texture_dataset/train.json

Environment

Bug description confirmation

Are you willing to submit a PR?

kaixin-bai commented 1 year ago

In the json files produced by the paddlex split, the annotations dict is empty; only the images were split. In other words, the split json files do not follow the original COCO dataset format. Question: is PaddleX's way of splitting COCO datasets unsupported for PaddleDetection training?
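The empty-annotations claim is easy to verify directly. A minimal sketch (the helper name and the in-memory sample are illustrative, not part of paddlex) that prints how many images and annotations a COCO-format json file contains:

```python
import json
import os
import tempfile

def summarize_coco(path):
    """Return (num_images, num_annotations) for a COCO-format json file."""
    with open(path) as f:
        coco = json.load(f)
    return len(coco.get("images", [])), len(coco.get("annotations", []))

# A tiny stand-in for a split file whose 'annotations' list came out empty:
sample = {"images": [{"id": 1, "file_name": "0001_texture.png"}], "annotations": []}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
print(summarize_coco(f.name))  # (1, 0): images were split, annotations were lost
os.remove(f.name)
```

Running it over the real `train.json`, `val.json`, and `test.json` would show at a glance which split files lost their annotations.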

wangxinxin08 commented 1 year ago

The json file without annotations should be the test set. If you don't have a validation set, use the training set's path for both the training and validation sets.

kaixin-bai commented 1 year ago

> The json file without annotations should be the test set. If you don't have a validation set, use the training set's path for both the training and validation sets.

annotations.json is the COCO-format annotation file I prepared. The train.json, val.json, and test.json generated by paddlex do not meet the COCO requirements: they only split the images. How should the config file be set in PaddleDetection in this case? Or is the way paddlex splits a COCO dataset incompatible with PaddleDetection?

wangxinxin08 commented 1 year ago

If none of them contain annotations, something probably went wrong when you converted the data. Check it, e.g. whether the annotation path is written incorrectly.

wangxinxin08 commented 1 year ago

There is only one COCO format, so as long as it is in COCO format it is compatible.

kaixin-bai commented 1 year ago

> If none of them contain annotations, something probably went wrong when you converted the data. Check it, e.g. whether the annotation path is written incorrectly.

All the annotations for my images are in the annotations.json file, and visualizing them shows the annotation data is fine. The dataset was split with paddlex, but the json files generated by the paddlex split do not follow the COCO format. The question is whether PaddleDetection supports PaddleX's way of splitting datasets.

kaixin-bai commented 1 year ago

I used the following two scripts to test the correctness of the generated annotation file; both show it is valid:

1.

# Source: https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#create-custom-coco-dataset
import base64
import IPython
import json
import numpy as np
import os
import random
import requests
from io import BytesIO
from math import trunc
from PIL import Image as PILImage
from PIL import ImageDraw as PILImageDraw

# Load the dataset json
class CocoDataset():
    def __init__(self, annotation_path, image_dir):
        self.annotation_path = annotation_path
        self.image_dir = image_dir
        self.colors = ['blue', 'purple', 'red', 'green', 'orange', 'salmon', 'pink', 'gold',
                       'orchid', 'slateblue', 'limegreen', 'seagreen', 'darkgreen', 'olive',
                       'teal', 'aquamarine', 'steelblue', 'powderblue', 'dodgerblue', 'navy',
                       'magenta', 'sienna', 'maroon'] * 3  # palette repeated so more segments get a color

        with open(self.annotation_path) as json_file:
            self.coco = json.load(json_file)

        #self.process_info()
        #self.process_licenses()
        self.process_categories()
        self.process_images()
        self.process_segmentations()

    def display_info(self):
        print('Dataset Info:')
        print('=============')
        for key, item in self.info.items():
            print('  {}: {}'.format(key, item))

        requirements = [['description', str],
                        ['url', str],
                        ['version', str],
                        ['year', int],
                        ['contributor', str],
                        ['date_created', str]]
        for req, req_type in requirements:
            if req not in self.info:
                print('ERROR: {} is missing'.format(req))
            elif type(self.info[req]) != req_type:
                print('ERROR: {} should be type {}'.format(req, str(req_type)))
        print('')

    def display_licenses(self):
        print('Licenses:')
        print('=========')

        requirements = [['id', int],
                        ['url', str],
                        ['name', str]]
        for license in self.licenses:
            for key, item in license.items():
                print('  {}: {}'.format(key, item))
            for req, req_type in requirements:
                if req not in license:
                    print('ERROR: {} is missing'.format(req))
                elif type(license[req]) != req_type:
                    print('ERROR: {} should be type {}'.format(req, str(req_type)))
            print('')
        print('')

    def display_categories(self):
        print('Categories:')
        print('=========')
        for sc_key, sc_val in self.super_categories.items():
            print('  super_category: {}'.format(sc_key))
            for cat_id in sc_val:
                print('    id {}: {}'.format(cat_id, self.categories[cat_id]['name']))
            print('')

    def display_image(self, image_id, show_polys=True, show_bbox=True, show_labels=True, show_crowds=True, use_url=False):
        print('Image:')
        print('======')
        if image_id == 'random':
            image_id = random.choice(list(self.images.keys()))

        # Print the image info
        image = self.images[image_id]
        for key, val in image.items():
            print('  {}: {}'.format(key, val))

        # Open the image
        if use_url:
            image_path = image['coco_url']
            response = requests.get(image_path)
            image = PILImage.open(BytesIO(response.content))

        else:
            image_path = os.path.join(self.image_dir, image['file_name'])
            image = PILImage.open(image_path)

        buffered = BytesIO()
        image.save(buffered, format="PNG")
        img_str = "data:image/png;base64, " + base64.b64encode(buffered.getvalue()).decode()

        # Calculate the size and adjusted display size
        max_width = 900
        image_width, image_height = image.size
        adjusted_width = min(image_width, max_width)
        adjusted_ratio = adjusted_width / image_width
        adjusted_height = adjusted_ratio * image_height

        # Create list of polygons to be drawn
        polygons = {}
        bbox_polygons = {}
        rle_regions = {}
        poly_colors = {}
        labels = {}
        print('  segmentations ({}):'.format(len(self.segmentations[image_id])))
        for i, segm in enumerate(self.segmentations[image_id]):
            polygons_list = []
            if segm['iscrowd'] != 0:
                # Gotta decode the RLE
                px = 0
                x, y = 0, 0
                rle_list = []
                for j, counts in enumerate(segm['segmentation']['counts']):
                    if j % 2 == 0:
                        # Empty pixels
                        px += counts
                    else:
                        # Need to draw on these pixels, since we are drawing in vector form,
                        # we need to draw horizontal lines on the image
                        x_start = trunc(trunc(px / image_height) * adjusted_ratio)
                        y_start = trunc(px % image_height * adjusted_ratio)
                        px += counts
                        x_end = trunc(trunc(px / image_height) * adjusted_ratio)
                        y_end = trunc(px % image_height * adjusted_ratio)
                        if x_end == x_start:
                            # This is only on one line
                            rle_list.append({'x': x_start, 'y': y_start, 'width': 1 , 'height': (y_end - y_start)})
                        if x_end > x_start:
                            # This spans more than one line
                            # Insert top line first
                            rle_list.append({'x': x_start, 'y': y_start, 'width': 1, 'height': (image_height - y_start)})

                            # Insert middle lines if needed
                            lines_spanned = x_end - x_start + 1 # total number of lines spanned
                            full_lines_to_insert = lines_spanned - 2
                            if full_lines_to_insert > 0:
                                full_lines_to_insert = trunc(full_lines_to_insert * adjusted_ratio)
                                rle_list.append({'x': (x_start + 1), 'y': 0, 'width': full_lines_to_insert, 'height': image_height})

                            # Insert bottom line
                            rle_list.append({'x': x_end, 'y': 0, 'width': 1, 'height': y_end})
                if len(rle_list) > 0:
                    rle_regions[segm['id']] = rle_list  
            else:
                # Add the polygon segmentation
                for segmentation_points in segm['segmentation']:
                    segmentation_points = np.multiply(segmentation_points, adjusted_ratio).astype(int)
                    polygons_list.append(str(segmentation_points).lstrip('[').rstrip(']'))

            polygons[segm['id']] = polygons_list

            if i < len(self.colors):
                poly_colors[segm['id']] = self.colors[i]
            else:
                poly_colors[segm['id']] = 'white'

            bbox = segm['bbox']
            bbox_points = [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1],
                           bbox[0] + bbox[2], bbox[1] + bbox[3], bbox[0], bbox[1] + bbox[3],
                           bbox[0], bbox[1]]
            bbox_points = np.multiply(bbox_points, adjusted_ratio).astype(int)
            bbox_polygons[segm['id']] = str(bbox_points).lstrip('[').rstrip(']')

            labels[segm['id']] = (self.categories[segm['category_id']]['name'], (bbox_points[0], bbox_points[1] - 4))

            # Print details
            print('    {}:{}:{}'.format(segm['id'], poly_colors[segm['id']], self.categories[segm['category_id']]))

        # Draw segmentation polygons on image
        html = '<div class="container" style="position:relative;">'
        html += '<img src="{}" style="position:relative;top:0px;left:0px;width:{}px;">'.format(img_str, adjusted_width)
        html += '<div class="svgclass"><svg width="{}" height="{}">'.format(adjusted_width, adjusted_height)

        if show_polys:
            for seg_id, points_list in polygons.items():
                fill_color = poly_colors[seg_id]
                stroke_color = poly_colors[seg_id]
                for points in points_list:
                    html += '<polygon points="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0.5" />'.format(points, fill_color, stroke_color)

        if show_crowds:
            for seg_id, rect_list in rle_regions.items():
                fill_color = poly_colors[seg_id]
                stroke_color = poly_colors[seg_id]
                for rect_def in rect_list:
                    x, y = rect_def['x'], rect_def['y']
                    w, h = rect_def['width'], rect_def['height']
                    html += '<rect x="{}" y="{}" width="{}" height="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0.5; stroke-opacity:0.5" />'.format(x, y, w, h, fill_color, stroke_color)

        if show_bbox:
            for seg_id, points in bbox_polygons.items():
                fill_color = poly_colors[seg_id]
                stroke_color = poly_colors[seg_id]
                html += '<polygon points="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0" />'.format(points, fill_color, stroke_color)

        if show_labels:
            for seg_id, label in labels.items():
                color = poly_colors[seg_id]
                html += '<text x="{}" y="{}" style="fill:{}; font-size: 12pt;">{}</text>'.format(label[1][0], label[1][1], color, label[0])

        html += '</svg></div>'
        html += '</div>'
        html += '<style>'
        html += '.svgclass { position:absolute; top:0px; left:0px;}'
        html += '</style>'
        return html

    def process_info(self):
        self.info = self.coco['info']

    def process_licenses(self):
        self.licenses = self.coco['licenses']

    def process_categories(self):
        self.categories = {}
        self.super_categories = {}
        for category in self.coco['categories']:
            cat_id = category['id']
            super_category = category['supercategory']

            # Add category to the categories dict
            if cat_id not in self.categories:
                self.categories[cat_id] = category
            else:
                print("ERROR: Skipping duplicate category id: {}".format(category))

            # Add category to super_categories dict
            if super_category not in self.super_categories:
                self.super_categories[super_category] = {cat_id} # Create a new set with the category id
            else:
                self.super_categories[super_category] |= {cat_id} # Add category id to the set

    def process_images(self):
        self.images = {}
        for image in self.coco['images']:
            image_id = image['id']
            if image_id in self.images:
                print("ERROR: Skipping duplicate image id: {}".format(image))
            else:
                self.images[image_id] = image

    def process_segmentations(self):
        self.segmentations = {}
        for segmentation in self.coco['annotations']:
            image_id = segmentation['image_id']
            if image_id not in self.segmentations:
                self.segmentations[image_id] = []
            self.segmentations[image_id].append(segmentation)

annotation_path = "./annotations.json"
image_dir = "./"
coco_dataset = CocoDataset(annotation_path, image_dir)
# coco_dataset.display_info()
# coco_dataset.display_licenses()
coco_dataset.display_categories()

html = coco_dataset.display_image(5, use_url=False)
IPython.display.HTML(html)

The output:

Categories:
=========
  super_category: background
    id 0: background

  super_category: obj1
    id 1: obj1

Image:
======
  coco_url: 
  height: 1544
  data_captured: 0
  width: 2064
  id: 5
  flickr_url: 
  file_name: 0005_texture.png
  license: 0
  segmentations (36):
    172:blue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    173:purple:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    174:red:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    175:green:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    176:orange:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    177:salmon:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    178:pink:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    179:gold:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    180:orchid:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    181:slateblue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    182:limegreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    183:seagreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    184:darkgreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    185:olive:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    186:teal:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    187:aquamarine:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    188:steelblue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    189:powderblue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    190:dodgerblue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    191:navy:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    192:magenta:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    193:sienna:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    194:maroon:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    195:blue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    196:purple:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    197:red:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    198:green:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    199:orange:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    200:salmon:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    201:pink:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    202:gold:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    203:orchid:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    204:slateblue:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    205:limegreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    206:seagreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}
    207:darkgreen:{'id': 1, 'name': 'obj1', 'supercategory': 'obj1'}

(image)

2.

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from PIL import Image
import requests
from pycocotools.coco import COCO

def main():
    coco_annotation_file_path = "./annotations.json"
    coco_annotation = COCO(annotation_file=coco_annotation_file_path)

    # Category IDs.
    cat_ids = coco_annotation.getCatIds()
    print(f"Number of Unique Categories: {len(cat_ids)}")
    print("Category IDs:")
    print(cat_ids)  # The IDs are not necessarily consecutive.

    # All categories.
    cats = coco_annotation.loadCats(cat_ids)
    cat_names = [cat["name"] for cat in cats]
    print("Categories Names:")
    print(cat_names)

    # Category ID -> Category Name.
    query_id = cat_ids[0]
    query_annotation = coco_annotation.loadCats([query_id])[0]
    query_name = query_annotation["name"]
    query_supercategory = query_annotation["supercategory"]
    print("Category ID -> Category Name:")
    print(
        f"Category ID: {query_id}, Category Name: {query_name}, Supercategory: {query_supercategory}"
    )

    # Category Name -> Category ID.
    query_name = cat_names[1]
    query_id = coco_annotation.getCatIds(catNms=[query_name])[0]
    print("Category Name -> ID:")
    print(f"Category Name: {query_name}, Category ID: {query_id}")

    # Get the ID of all the images containing the object of the category.
    img_ids = coco_annotation.getImgIds(catIds=[query_id])
    print(f"Number of Images Containing {query_name}: {len(img_ids)}")

    # Pick one image.
    random_pick = 0
    img_id = img_ids[random_pick]
    img_info = coco_annotation.loadImgs([img_id])[0]
    img_file_name = img_info["file_name"]
    img_url = img_info["coco_url"]
    print(
        f"Image ID: {img_id}, File Name: {img_file_name}, Image URL: {img_url}"
    )

    # Get all the annotations for the specified image.
    ann_ids = coco_annotation.getAnnIds(imgIds=[img_id], iscrowd=None)
    anns = coco_annotation.loadAnns(ann_ids)
    print(f"Annotations for Image ID {img_id}:")
    print(anns)

    # Load the image from disk.
    im = Image.open("./{}_texture.png".format(str(random_pick).zfill(4)))

    # Save image and its labeled version.
    plt.axis("off")
    plt.imshow(np.asarray(im))
    plt.savefig(f"{img_id}.jpg", bbox_inches="tight", pad_inches=0)
    # Plot segmentation and bounding box.
    coco_annotation.showAnns(anns, draw_bbox=True)
    plt.savefig(f"{img_id}_annotated.jpg", bbox_inches="tight", pad_inches=0)
    return

if __name__ == "__main__":
    main()

(image: 0_annotated)

Questions:

1. Does PaddleDetection currently support instance segmentation annotated as RLE in COCO datasets? The dataset here is trained for object detection, so it shouldn't matter, right?
2. Since pycocotools can read the annotation file correctly, why does PaddleDetection raise an error?
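For context on question 1: COCO stores iscrowd=1 segmentations as uncompressed RLE whose run counts proceed down the columns of the mask. A minimal decoder sketch, independent of pycocotools and PaddleDetection (the function name is illustrative):

```python
import numpy as np

def decode_uncompressed_rle(rle):
    """Decode COCO uncompressed RLE (column-major run counts) to a binary mask."""
    h, w = rle["size"]
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for count in rle["counts"]:
        flat[pos:pos + count] = val
        pos += count
        val = 1 - val              # runs alternate between background and foreground
    return flat.reshape((w, h)).T  # counts are column-major: reshape, then transpose

# 5x5 mask: 5 background pixels, then 5 foreground -> column 1 is all ones
mask = decode_uncompressed_rle({"size": [5, 5], "counts": [5, 5, 15]})
print(mask[:, 1])  # [1 1 1 1 1]
```

This is the same representation pycocotools handles via `mask.frPyObjects` and `mask.decode`, which is why the visualization scripts above can read the file.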

kaixin-bai commented 1 year ago

Here is a test annotation file with a sample image: img_and_annotations.zip

wangxinxin08 commented 1 year ago

> Here is a test annotation file with a sample image: img_and_annotations.zip

Your json annotation file doesn't even contain that image; you probably generated it incorrectly. I suggest debugging this kind of problem yourself; it's easy to spot.

kaixin-bai commented 1 year ago

> Your json annotation file doesn't even contain that image; you probably generated it incorrectly. I suggest debugging this kind of problem yourself; it's easy to spot.

It does contain it: the first image in the annotation file's ['images'], with id=0, is exactly the one bundled in the zip.

kaixin-bai commented 1 year ago

Update: with the dataset unchanged, mmdetection's yolov3 trains successfully.

The problem should therefore be in how PaddleDetection loads the dataset; I'll keep debugging.

wangxinxin08 commented 1 year ago

I debugged it again for you: the annotations you generated are flagged as crowd by default, so they get filtered out. The exact location is here: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/ppdet/data/source/coco.py#L132 Setting load_crowd fixes it.
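One quick way to confirm this diagnosis is to count the iscrowd values across the annotation file. A small sketch (the helper name and the in-memory sample are illustrative):

```python
import json
import os
import tempfile
from collections import Counter

def iscrowd_histogram(path):
    """Count how many annotations carry each iscrowd value in a COCO json file."""
    with open(path) as f:
        anns = json.load(f)["annotations"]
    return Counter(a.get("iscrowd", 0) for a in anns)

# Stand-in annotation file where every box is flagged as crowd:
sample = {"annotations": [{"id": 1, "iscrowd": 1}, {"id": 2, "iscrowd": 1}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
print(iscrowd_histogram(f.name))  # Counter({1: 2})
os.remove(f.name)
```

A result like `Counter({1: N})` on the real train.json would mean every box is a crowd annotation and gets dropped by PaddleDetection's default `load_crowd=False` filter, producing exactly the "not found any coco record" assertion.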

kaixin-bai commented 1 year ago

> I debugged it again for you: the annotations you generated are flagged as crowd by default, so they get filtered out. The exact location is here: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/ppdet/data/source/coco.py#L132 Setting load_crowd fixes it.

Setting that variable to either True or False didn't seem to work when I tested it earlier; I'll test it again.

wangxinxin08 commented 1 year ago

Add it below this line: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/datasets/coco_detection.yml#L9

kaixin-bai commented 1 year ago

My coco_detection_mps1texture.yml file is as follows:

metric: COCO
num_classes: 1

TrainDataset:
  !COCODataSet
    image_dir: train_texture
    anno_path: train.json
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: val_texture
    anno_path: val.json
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/

TestDataset:
  !ImageFolder
    anno_path: val.json # also support txt (like VOC's label_list.txt)
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/ # if set, anno_path will be 'dataset_dir/anno_path'

The config file does specify 'is_crowd'. After loading, line 132 of ppdet/data/source/coco.py shows that self.load_crowd is False. I can't debug properly at the moment, so I changed the default value to load_crowd=True directly on line 51 of coco.py, and the result is usable.

wangxinxin08 commented 1 year ago

@kaixin-bai You misunderstood: is_crowd indicates whether the samples include the is_crowd field. What you need to add below https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/datasets/coco_detection.yml#L9 is load_crowd: true

kaixin-bai commented 1 year ago

> @kaixin-bai You misunderstood: is_crowd indicates whether the samples include the is_crowd field. What you need to add below https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/datasets/coco_detection.yml#L9 is load_crowd: true

Adding load_crowd: true to the config file as suggested still gives the same error; I tried both True and true.

metric: COCO
num_classes: 1

TrainDataset:
  !COCODataSet
    image_dir: train_texture
    anno_path: train.json
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
    load_crowd: True

EvalDataset:
  !COCODataSet
    image_dir: val_texture
    anno_path: val.json
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/
    load_crowd: True

TestDataset:
  !ImageFolder
    anno_path: val.json # also support txt (like VOC's label_list.txt)
    dataset_dir: /data-r10/kb/Projects/SynDataGen/datasets/mps1/ # if set, anno_path will be 'dataset_dir/anno_path'
    load_crowd: True

wangxinxin08 commented 1 year ago

I tried it on my side and load_crowd: true works fine. You may not have saved the config file after editing it, or you may be running a different config file. Also, if you installed via pip install paddledet or python setup.py install, uninstall it first.

kaixin-bai commented 1 year ago

> I tried it on my side and load_crowd: true works fine. You may not have saved the config file after editing it, or you may be running a different config file. Also, if you installed via pip install paddledet or python setup.py install, uninstall it first.

Which version of paddlepaddle are you using, and which branch of PaddleDetection?

wangxinxin08 commented 1 year ago

@kaixin-bai I'm using PaddleDetection 2.5; PaddleDetection 2.3/2.4 should also work.

kaixin-bai commented 1 year ago

Update: when the segmentation encoding is RLE, PaddleDetection's coco.py has a bug. It appears to have been fixed in the latest branch, but if pycocotools is used, pycocotools itself still has a bug that breaks training on RLE-encoded instance segmentation.
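If upgrading is not an option, and the RLE masks really describe single objects rather than true crowds (as in this dataset), one possible detection-only workaround is to clear the iscrowd flag before training so the default filter keeps the boxes. A sketch; the function name and file paths are illustrative:

```python
import json
import os
import tempfile

def mark_not_crowd(in_path, out_path):
    """Copy a COCO json file with every annotation's iscrowd forced to 0.

    Only appropriate when the RLE masks really are single objects;
    genuine crowd regions should keep iscrowd=1.
    """
    with open(in_path) as f:
        coco = json.load(f)
    for ann in coco.get("annotations", []):
        ann["iscrowd"] = 0
    with open(out_path, "w") as f:
        json.dump(coco, f)

# Example round trip on a stand-in file:
sample = {"annotations": [{"id": 1, "iscrowd": 1}]}
src = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump(sample, src)
src.close()
dst = src.name + ".out.json"
mark_not_crowd(src.name, dst)
with open(dst) as f:
    print(json.load(f)["annotations"][0]["iscrowd"])  # 0
os.remove(src.name)
os.remove(dst)
```

This sidesteps both the load_crowd filtering and any RLE crowd-handling bugs for a pure detection task, since the segmentation masks are not consumed during bbox training.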