Hi @abc123yuanrui, have you considered the label numbering of the segmentation output? There are only 14 labels in the segmentation mask. Try printing the unique values of the label map you are using; they shouldn't go above 13.
0 -> Background
1 -> Hair
4 -> Upclothes
5 -> Left-shoe
6 -> Right-shoe
7 -> Noise
8 -> Pants
9 -> Left_leg
10 -> Right_leg
11 -> Left_arm
12 -> Face
13 -> Right_arm
Try the code below:

```python
import cv2
import numpy as np

# Load the label map and print every label value it contains.
img = cv2.imread('path to label')   # replace with the path to your label image
print(np.unique(img))               # values should stay within 0..13
```
Hi @MotiBaadror, thanks for your answer. I figured out the issue was caused by a label-number mismatch: the input segmentation I used had 21 labels (20 LIP + neck), but I had configured it as 20. Sorry, I forgot to close this issue.
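For anyone hitting the same mismatch, here is a minimal sketch that reports the highest label index actually present in the masks, which must be smaller than the label count you configure. The directory name `Data_preprocessing/train_label/` is just an assumption; point it at your own label folder.

```python
import glob

import cv2
import numpy as np

# Scan every label PNG and track the largest class index that actually occurs.
max_label = 0
for path in glob.glob('Data_preprocessing/train_label/*.png'):
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if mask is None:
        continue  # skip unreadable files
    max_label = max(max_label, int(mask.max()))

# The configured label count must be at least max_label + 1
# (e.g. 21, not 20, for the 20 LIP labels + neck described above).
print('highest label index:', max_label, '-> need at least', max_label + 1, 'channels')
```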
@abc123yuanrui @MotiBaadror Hi, can you please tell us what label number 7, named 'noise', refers to, and which segmentation model you used? It would be very helpful.
> @abc123yuanrui @MotiBaadror Hi, can you please tell us what label number 7, named 'noise', refers to, and which segmentation model you used? It would be very helpful.

Hi @AjayMudhai, label number 7 is random mask noise used to cover the cloth and image. However, no such label appears in customised segmentation results generated by CIHP, Self-Correction-Human-Parsing, or other LIP-related segmentation methods. According to the author of ACGPN, you can omit channel 7 when you do customised training/testing. I personally used Self-Correction-Human-Parsing for LIP and further processed the output with cp-vton-plus neck detection, which gives 21 labels.
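This is not the ACGPN code itself, just a rough illustration of what "omit channel 7" means in practice: if your parser never outputs label 7, the corresponding one-hot channel simply stays empty. The 14-channel count and the toy label values here are assumptions based on the label list above.

```python
import numpy as np

# Stand-in for a parser output that uses labels 0-13 but never 7 ("noise").
parsing = np.zeros((256, 192), dtype=np.int64)
parsing[:128] = 1          # e.g. hair
parsing[128:] = 4          # e.g. upclothes

num_labels = 14            # assumed total channel count
one_hot = np.eye(num_labels, dtype=np.float32)[parsing]  # shape (H, W, 14)

# Channel 7 still exists but is all zeros, which effectively "omits" the noise channel.
print(one_hot[..., 7].sum())   # 0.0
```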
@abc123yuanrui @AjayMudhai @MotiBaadror @Mushahid2521 @ChongjianGE @taliegemen @ecarist How do I generate files like those in the folder /DeepFashion_Try_On/Data_preprocessing/test_label? I followed your directions to use https://github.com/PeikeLi/Self-Correction-Human-Parsing trained on the LIP dataset and got the result below.
For comparison, the files in /DeepFashion_Try_On/Data_preprocessing/test_label look black with a silhouette by default.
Thanks
@frankiesquared Hi, you just need to save the image before applying the palette. Make the change shown below in simple_extractor.py and you will get your results.

```python
output_img = Image.fromarray(np.asarray(parsing_result, dtype=np.uint8))
# output_img.putpalette(palette)  # comment out this line so the raw label indices are saved
output_img.save(parsing_result_path)
```

Hope it helps! :-)
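To double-check the result, the saved label should now be a flat index map rather than a colourised image. The path below is just a placeholder for one of your saved files.

```python
from PIL import Image
import numpy as np

# Load one saved parsing result and inspect it.
mask = np.array(Image.open('outputs/your_image.png'))  # placeholder path
print(mask.shape)        # expect (H, W), a single channel
print(np.unique(mask))   # expect class indices such as 0..19 for LIP, not RGB triplets
```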
When I trained the network with the LIP dataset (20-label segmentation input), this error was raised. Based on https://github.com/NVlabs/SPADE/issues/57 it should be a channel problem, but I believe I had changed all the relevant channels and the total channel count, except for the noise channel. Since the LIP dataset doesn't have a counterpart for it, I used the dress channel instead, and that does not seem to work properly. Any idea how to fix this issue?
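For reference, the channel error in SPADE-style code typically comes from the one-hot encoding of the label map. Here is a rough sketch (the tensor sizes are made up) of why a label value greater than or equal to the configured channel count fails:

```python
import torch

label_nc = 14                                         # channels the model expects
label_map = torch.randint(0, 21, (1, 1, 256, 192))    # a 21-label parsing fed in by mistake

one_hot = torch.zeros(1, label_nc, 256, 192)
# scatter_ writes a 1 at the channel given by each label value; any value >= label_nc
# is out of range and raises the kind of error discussed in the SPADE issue linked above.
one_hot.scatter_(1, label_map, 1.0)
```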