cocodataset / cocoapi

COCO API - Dataset @ http://cocodataset.org/

How to get a result.json file for evaluation #343

Open niazahamd89 opened 4 years ago

niazahamd89 commented 4 years ago

I trained the model on both the train and val COCO datasets. As a result, my model generates output in .ckpt format, which I then use for pose estimation, but I am confused about how to get the .json file needed to evaluate the model and obtain AP or AR. Does the model generate this file by itself?

prathamsss commented 3 years ago

To get metrics, you first have to use pycocotools for evaluation. It takes the ground truth and predictions as input and gives AP. For this, you will need to write a custom script that formats your output results in COCO style.

https://cocodataset.org/#format-data
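As a rough sketch of that workflow (the annotation path below is a placeholder, and `predictions` still has to be filled in from your own model's output, so this is not a drop-in script):

```python
import json

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations in COCO format (placeholder path)
coco_gt = COCO("annotations/person_keypoints_val2017.json")

# Your custom script must convert model output into a list of COCO-style
# result dicts; for keypoints, each entry looks like:
#   {"image_id": int, "category_id": int,
#    "keypoints": [x1, y1, v1, x2, y2, v2, ...], "score": float}
predictions = []  # fill this in from your model's inference output

# Save to result.json so the same file can be reused for evaluation
with open("result.json", "w") as f:
    json.dump(predictions, f)

# loadRes accepts either the list itself or the path to the JSON file
coco_dt = coco_gt.loadRes("result.json")

coco_eval = COCOeval(coco_gt, coco_dt, "keypoints")  # or "bbox" / "segm"
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP / AR table
```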

BishwaBS commented 3 years ago

1) If you look at the bottom of the code below, you will see the summarize() function being called. You need to store what it returns in a variable and return that variable. Please note that the output will be a numpy.ndarray; you can convert this array to a list or JSON as per your need (a sketch of dumping it to JSON follows after the snippets below).

```python
class CocoDataset(utils.Dataset):
    def load_coco(self, dataset_path, annotation_path, year=None, class_ids=None,
                  class_map=None, return_coco=False, auto_download=False):
        """Load a subset of the COCO dataset.
        dataset_path: The root directory of the COCO images.
        annotation_path: Path to the COCO annotation .json file.
        year: What dataset year to load (2014, 2017) as a string, not an integer
        class_ids: If provided, only loads images that have the given classes.
        class_map: TODO: Not implemented yet. Supports mapping classes from
            different datasets to the same class ID.
        return_coco: If True, returns the COCO object.
        auto_download: Automatically download and unzip MS-COCO images and annotations
        """
        coco = COCO(annotation_path)
        image_dir = dataset_path

        # Load all classes or a subset?
        if not class_ids:
            # All classes
            class_ids = sorted(coco.getCatIds())

        # All images or a subset?
        if class_ids:
            image_ids = []
            for id in class_ids:
                image_ids.extend(list(coco.getImgIds(catIds=[id])))
            # Remove duplicates
            image_ids = list(set(image_ids))
        else:
            # All images
            image_ids = list(coco.imgs.keys())

        # Add classes
        for i in class_ids:
            self.add_class("coco", i, coco.loadCats(i)[0]["name"])

        # Add images
        for i in image_ids:
            self.add_image(
                "coco", image_id=i,
                path=os.path.join(image_dir, coco.imgs[i]['file_name']),
                width=coco.imgs[i]["width"],
                height=coco.imgs[i]["height"],
                annotations=coco.loadAnns(coco.getAnnIds(
                    imgIds=[i], catIds=class_ids, iscrowd=None)))
        if return_coco:
            return coco


def build_coco_results(dataset, image_ids, rois, class_ids, scores, masks):
    """Arrange results to match COCO specs in http://cocodataset.org/#format
    """
    # If no results, return an empty list
    if rois is None:
        return []

    results = []
    for image_id in image_ids:
        # Loop through detections
        for i in range(rois.shape[0]):
            class_id = class_ids[i]
            score = scores[i]
            bbox = np.around(rois[i], 1)
            mask = masks[:, :, i]

            result = {
                "image_id": image_id,
                "category_id": dataset.get_source_class_id(class_id, "coco"),
                "bbox": [bbox[1], bbox[0], bbox[3] - bbox[1], bbox[2] - bbox[0]],
                "score": score,
                "segmentation": maskUtils.encode(np.asfortranarray(mask))
            }
            results.append(result)
    return results


def evaluate_coco(model, dataset, coco, eval_type="bbox", limit=0, image_ids=None):
    """Runs official COCO evaluation.
    dataset: A Dataset object with validation data
    eval_type: "bbox" or "segm" for bounding box or segmentation evaluation
    limit: if not 0, it's the number of images to use for evaluation
    """
    # Pick COCO images from the dataset
    image_ids = image_ids or dataset.image_ids

    # Limit to a subset
    if limit:
        image_ids = image_ids[:limit]

    # Get corresponding COCO image IDs.
    coco_image_ids = [dataset.image_info[id]["id"] for id in image_ids]

    t_prediction = 0
    t_start = time.time()

    results = []
    for i, image_id in enumerate(image_ids):
        # Load image
        image = dataset.load_image(image_id)

        # Run detection
        t = time.time()
        r = model.detect([image], verbose=0)[0]
        t_prediction += (time.time() - t)

        # Convert results to COCO format
        # Cast masks to uint8 because COCO tools errors out on bool
        image_results = build_coco_results(dataset, coco_image_ids[i:i + 1],
                                           r["rois"], r["class_ids"],
                                           r["scores"],
                                           r["masks"].astype(np.uint8))
        results.extend(image_results)

    # Load results. This modifies results with additional attributes.
    coco_results = coco.loadRes(results)

    # Evaluate
    cocoEval = COCOeval(coco, coco_results, eval_type)
    cocoEval.params.imgIds = coco_image_ids
    cocoEval.evaluate()
    cocoEval.accumulate()

    # Store the results into a variable (requires the summarize() change in step 2)
    my_results = cocoEval.summarize()
    return my_results
```

2) You might need to make a change in the COCOeval class (you can fetch this class from pycocotools.cocoeval): add `return self.stats` at the end of its summarize() method, as shown below.

```python
# At the end of COCOeval.summarize() in pycocotools/cocoeval.py:
if not self.eval:
    raise Exception('Please run accumulate() first')
iouType = self.params.iouType
if iouType == 'segm' or iouType == 'bbox':
    summarize = _summarizeDets
elif iouType == 'keypoints':
    summarize = _summarizeKps
self.stats = summarize()
return self.stats  # add here
```
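If you then want the summary metrics as a JSON file, one way to do it looks like this (the metric names below are just labels I chose for the standard 12-entry bbox/segm summary order; the keypoints summary has a different, 10-entry layout):

```python
import json

# my_results is the numpy array returned by the modified summarize().
# For bbox/segm evaluation it has 12 entries in the standard order.
metric_names = ["AP", "AP50", "AP75", "AP_small", "AP_medium", "AP_large",
                "AR_max1", "AR_max10", "AR_max100",
                "AR_small", "AR_medium", "AR_large"]

stats = {name: float(value) for name, value in zip(metric_names, my_results)}

with open("eval_stats.json", "w") as f:
    json.dump(stats, f, indent=2)
```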

hebangwen commented 3 years ago

@DigitalPlantScience Hi, I have a question about the variable score in your code. What does score mean? In my view, score probably means the model confidence score in object detection, but I don't know what it means in human keypoint tasks. Any reply or answer would be appreciated.

BishwaBS commented 3 years ago

@BangwenHe The score is the confidence score. That's part of the source code, not something I added or modified. I'm sorry, but I don't know what it means for the keypoints problem. Hope someone clarifies your query. Good luck.

hebangwen commented 3 years ago

> @BangwenHe The score is the confidence score. That's part of the source code, not something I added or modified. I'm sorry, but I don't know what it means for the keypoints problem. Hope someone clarifies your query. Good luck.

Thanks for your reply. I figured it out several days ago in debug mode. For the keypoints problem, it means the average confidence score over all visible keypoints.
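For anyone who lands here later, the computation amounts to something like this (the confidences and the visibility threshold below are made-up example values):

```python
import numpy as np

# Hypothetical per-keypoint confidences from a pose model (COCO defines
# 17 keypoints per person)
keypoint_scores = np.array([0.91, 0.88, 0.12, 0.85, 0.07, 0.76, 0.81,
                            0.64, 0.70, 0.55, 0.49, 0.90, 0.87, 0.66,
                            0.72, 0.58, 0.61])

# Treat keypoints above an arbitrary confidence threshold as visible
visible = keypoint_scores > 0.2

# Instance-level "score" = mean confidence over the visible keypoints
instance_score = keypoint_scores[visible].mean()
print(instance_score)
```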