Ha0Tang opened 4 years ago
Which GitHub code did you use? And which pretrained model did you use (trained on LIP, DeepFashion, or another dataset)?
Since you did not have ground-truth segmentation maps, how did you calculate mIoU and Acc?
Hi, the GitHub code is CIHP_PGN. We used the model pretrained on CIHP to get the segmentation maps.
In the link you provided, I only found the pretrained model trained on the Crowd Instance-level Human Parsing (CIHP) Dataset rather than the LIP dataset.
Yes, it is the CIHP dataset. Sorry about that.
Did you extract 8 labels or 20 labels from the generated images to calculate mIoU and Acc?
I know you extracted 8 labels from the real images as ground truth.
We used the off-the-shelf human parser from "Instance-level Human Parsing via Part Grouping Network" on the DeepFashion dataset, and reorganized the parsing map into eight categories: hair, face, skin (including hands and legs), top-clothes, bottom-clothes, socks, shoes, and background.
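For reference, the remapping and evaluation described above could be sketched as follows. This is a minimal, hypothetical example: the grouping of 20 CIHP labels into the 8 categories (`CIHP_TO_8`) is one plausible assignment based on the common CIHP/LIP label ordering (0=background, 2=hair, 13=face, ...), not the paper's confirmed mapping, so the IDs should be verified against the CIHP_PGN repository.

```python
import numpy as np

# Hypothetical remapping of 20 CIHP parsing labels to the 8 categories
# named above. Label IDs follow the usual CIHP/LIP ordering; ambiguous
# classes (hat, sunglasses, dress, scarf) are assigned by guesswork.
CIHP_TO_8 = {
    0: 0,                                       # background
    2: 1,                                       # hair
    13: 2, 4: 2,                                # face (+ sunglasses)
    3: 3, 10: 3, 14: 3, 15: 3, 16: 3, 17: 3,    # skin incl. hands/legs
    1: 4, 5: 4, 6: 4, 7: 4, 11: 4,              # top-clothes
    9: 5, 12: 5,                                # bottom-clothes
    8: 6,                                       # socks
    18: 7, 19: 7,                               # shoes
}

def remap(parsing: np.ndarray) -> np.ndarray:
    """Map a 20-class parsing map to the 8-category scheme via a LUT."""
    lut = np.zeros(20, dtype=np.int64)
    for src, dst in CIHP_TO_8.items():
        lut[src] = dst
    return lut[parsing]

def pixel_acc_and_miou(pred: np.ndarray, gt: np.ndarray, n_cls: int = 8):
    """Pixel accuracy and mean IoU from two 8-category label maps."""
    # Build the confusion matrix with a single bincount.
    conf = np.bincount(n_cls * gt.ravel() + pred.ravel(),
                       minlength=n_cls * n_cls).reshape(n_cls, n_cls)
    acc = np.diag(conf).sum() / conf.sum()
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    # Average IoU only over categories that actually occur.
    iou = np.diag(conf)[union > 0] / union[union > 0]
    return acc, iou.mean()
```

Here the parser's output on a generated image and the 8-label map extracted from the corresponding real image would both be passed through `remap` (if still in 20-class form) before calling `pixel_acc_and_miou`.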