Seanseattle / SMIS

Semantically Multi-modal Image Synthesis (CVPR 2020)

How to obtain mIoU and Acc on DeepFashion? Which pretrained model did you use? #7

Open Ha0Tang opened 4 years ago

Seanseattle commented 4 years ago

We used the off-the-shelf human parser from "Instance-level human parsing via part grouping network" for the DeepFashion dataset, and reorganized its parsing maps into eight categories: hair, face, skin (including hands and legs), top-clothes, bottom-clothes, socks, shoes, and background.
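The reorganization described above amounts to remapping the parser's part IDs to eight merged categories. A minimal sketch with NumPy; the specific CIHP-ID-to-category assignments below are illustrative guesses, not the authors' actual table:

```python
import numpy as np

# Hypothetical mapping from the 20 CIHP part IDs to the 8 merged
# categories (0=background, 1=hair, 2=face, 3=skin, 4=top-clothes,
# 5=bottom-clothes, 6=socks, 7=shoes). The ID assignments here are
# illustrative only.
CIHP_TO_8 = np.zeros(20, dtype=np.int64)
CIHP_TO_8[2] = 1                    # hair
CIHP_TO_8[13] = 2                   # face
for part in (10, 14, 15, 16, 17):   # torso skin, arms, legs
    CIHP_TO_8[part] = 3
for part in (5, 6, 7, 11):          # upper clothes, dress, coat, scarf
    CIHP_TO_8[part] = 4
for part in (9, 12):                # pants, skirt
    CIHP_TO_8[part] = 5
CIHP_TO_8[8] = 6                    # socks
for part in (18, 19):               # left/right shoe
    CIHP_TO_8[part] = 7

def merge_labels(parsing_map: np.ndarray) -> np.ndarray:
    """Remap an (H, W) map of CIHP part IDs to the 8 merged categories."""
    return CIHP_TO_8[parsing_map]
```

Using a lookup table this way remaps every pixel in one vectorized indexing operation, which is convenient when processing whole datasets.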

Ha0Tang commented 4 years ago

Which GitHub code did you use? And which pretrained model (trained on LIP, DeepFashion, or another dataset) did you use?

Since you do not have ground-truth segmentation maps, how did you calculate mIoU and Acc?

Seanseattle commented 4 years ago

Hi, the GitHub code is CIHP_PGN. We used the model pretrained on CIHP to get segmentation maps.
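Given parser outputs on generated images and merged labels on real images, mIoU and pixel accuracy can be computed from a confusion matrix. A minimal sketch (standard metric definitions, not the authors' exact evaluation script):

```python
import numpy as np

def confusion_matrix(pred: np.ndarray, gt: np.ndarray, num_classes: int = 8) -> np.ndarray:
    """Accumulate a num_classes x num_classes confusion matrix from
    flat label arrays (rows: ground truth, cols: prediction)."""
    idx = gt.astype(np.int64) * num_classes + pred.astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_acc(pred: np.ndarray, gt: np.ndarray, num_classes: int = 8):
    """Return (mIoU over classes present, overall pixel accuracy)."""
    cm = confusion_matrix(pred.ravel(), gt.ravel(), num_classes)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)   # guard against empty classes
    miou = iou[union > 0].mean()      # average only over classes that occur
    acc = tp.sum() / cm.sum()
    return miou, acc
```

In practice the confusion matrix would be summed over the whole test set before taking the diagonal, so per-image class imbalance does not skew the result.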

Ha0Tang commented 4 years ago

In the link you provided, I only found the pretrained model trained on the Crowd Instance-level Human Parsing (CIHP) Dataset rather than the LIP dataset.

Seanseattle commented 4 years ago

Yes, it is the CIHP dataset. Sorry about that.

Ha0Tang commented 4 years ago

Did you extract 8 labels or 20 labels from the generated images to calculate mIoU and Acc?

I know you extracted 8 labels from the real images as the ground truth.