yyvhang opened 1 year ago
Hi, in the paper we use an existing human pose estimation algorithm to first extract person proposals, and then associate contacts with different persons. This follows the intuition that a contact area must lie on a human body, so concretely you can take the intersection of the contact map and the human bounding boxes or segmentation masks.
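A minimal sketch of that intersection step, assuming the contact map is a per-pixel probability array and each person proposal comes with a boolean segmentation mask (the function name, threshold, and array shapes here are illustrative, not the repository's actual API):

```python
import numpy as np

def assign_contacts_to_persons(contact_map, person_masks, threshold=0.5):
    """Associate a contact map with person instances by mask intersection.

    contact_map: (H, W) float array of per-pixel contact probabilities.
    person_masks: list of (H, W) boolean arrays, one per detected person.
    Returns one (H, W) boolean array of contact pixels per person.
    """
    contact = contact_map >= threshold          # binarize the contact map
    return [np.logical_and(contact, mask) for mask in person_masks]

# Toy example: a 4x4 image with two "persons" covering the left/right halves
# and a contact region in the middle, split between the two.
contact_map = np.zeros((4, 4))
contact_map[1:3, 1:3] = 1.0
left = np.zeros((4, 4), bool);  left[:, :2] = True
right = np.zeros((4, 4), bool); right[:, 2:] = True
per_person = assign_contacts_to_persons(contact_map, [left, right])
print([int(m.sum()) for m in per_person])  # → [2, 2]
```

With bounding boxes instead of segmentation masks, the same idea applies: rasterize each box into a boolean mask and intersect, at the cost of mis-assigning contact pixels where boxes overlap.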
However, this can still be problematic when multiple persons and heavy occlusions appear in the same image, because the currently released data and annotations do not distinguish different instances of the contacts (it is more of a "semantic segmentation" setting). We do have these raw instance-level annotations, and we are planning to release a V2 sometime in September.
Hope this helps. Best.
Hi, thanks for the great work. As mentioned in the paper, the 2D annotations can be converted into 3D annotations through the defined parts, but some images contain contact annotations for multiple people. How do you handle this situation in the experiments?
Hope to get your reply. Thanks!