liruilong940607 / OCHumanApi

API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
http://www.liruilong.cn/projects/pose2seg/index.html
MIT License

Why only some people are marked? #2

Closed jianlong-yuan closed 5 years ago

jianlong-yuan commented 5 years ago

Why are only some people marked, while others in the same picture are not?

[image: example with only some instances annotated]

liruilong940607 commented 5 years ago

We annotated all the bounding boxes. But because mask & keypoint annotation is quite a time-consuming job, a few instances have no such annotation. (It doesn't affect the evaluation.) Our visualisation code only shows instances with full annotations here.
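For anyone wanting to reproduce that filtering themselves, below is a minimal sketch (not the repo's actual visualisation code). The field names `"annotations"`, `"segms"`, and `"kpts"`, and the file name `ochuman.json`, are assumptions about the OCHuman JSON layout and may differ in the released annotation file.

```python
import json

def fully_annotated_instances(image_record):
    """Keep only instances that carry both a mask and keypoints,
    mirroring what the maintainer describes above."""
    instances = []
    for anno in image_record.get("annotations", []):
        has_mask = anno.get("segms") is not None
        has_kpts = anno.get("kpts") is not None
        if has_mask and has_kpts:
            instances.append(anno)
    return instances

with open("ochuman.json") as f:  # placeholder path to the annotation file
    dataset = json.load(f)

for img in dataset["images"]:
    visible = fully_annotated_instances(img)
    print(img.get("file_name"), "fully annotated instances:", len(visible))
```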

chamecall commented 1 year ago

> We annotated all the bounding boxes. But because mask & keypoint annotation is quite a time-consuming job, a few instances have no such annotation. (It doesn't affect the evaluation.) Our visualisation code only shows instances with full annotations here.

Hello, I'm wondering how you've handled evaluation given the fact that some instances are not annotated. For example, you might get predictions for two instances in the image above, but there is only one GT instance.

What's the logic for matching a specific prediction instance with its corresponding GT instance? Something like bounding-box closeness, or what?

liruilong940607 commented 1 year ago

The COCO evaluation toolbox can tolerate cases like this.

I can't remember precisely, but the logic is something like: for each GT, you find the prediction that has the maximum IoU with it and consider that a match.
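A simplified sketch of that greedy max-IoU matching is below. This is not the pycocotools implementation (COCOeval actually iterates over detections sorted by score and also supports ignore regions), just an illustration of the idea: each GT is paired with at most one prediction, and extra predictions simply remain unmatched.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_match(gt_boxes, pred_boxes, iou_thr=0.5):
    """For each GT, pick the unmatched prediction with the highest IoU above
    the threshold. GTs left unmatched are misses; predictions left unmatched
    have no GT pair."""
    matched_preds = set()
    matches = {}  # gt index -> pred index
    for gi, gt in enumerate(gt_boxes):
        best_iou, best_pi = iou_thr, None
        for pi, pred in enumerate(pred_boxes):
            if pi in matched_preds:
                continue
            overlap = iou(gt, pred)
            if overlap > best_iou:
                best_iou, best_pi = overlap, pi
        if best_pi is not None:
            matches[gi] = best_pi
            matched_preds.add(best_pi)
    return matches
```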