I'm trying to replicate the preprocessing steps, but the results I obtain differ from the dataset.
In step 4, cloth mask: I ran the model that is referenced, but it outputs the clothes in color with the background removed, whereas the images in the dataset show the cloth in white on an all-black background.
[Image: cloth-mask in your dataset] [Image: what the cloth-mask model generates for me]
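In case it helps anyone hitting the same mismatch: one way to turn a color cutout (background removed) into the dataset-style binary mask is to threshold it yourself. This is just a sketch of my workaround, not the repo's actual method, and it assumes the model's background pixels are either fully transparent (alpha channel) or near-black:

```python
import numpy as np
from PIL import Image

def to_binary_mask(path_in, path_out, thresh=10):
    """Convert a color cloth cutout into a white-cloth / black-background
    mask. Assumes background pixels are fully transparent (if an alpha
    channel exists) or near-black (otherwise)."""
    arr = np.array(Image.open(path_in))
    if arr.ndim == 3 and arr.shape[2] == 4:
        # RGBA: any non-transparent pixel is cloth.
        mask = (arr[:, :, 3] > thresh).astype(np.uint8) * 255
    else:
        # No alpha: treat any non-black pixel as cloth.
        gray = arr if arr.ndim == 2 else arr[:, :, :3].max(axis=2)
        mask = (gray > thresh).astype(np.uint8) * 255
    Image.fromarray(mask, mode="L").save(path_out)
```

This gives white cloth on black background, but I'd still like to know whether the dataset masks were produced this way or by a different postprocessing step.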
In step 2, Human parse: can you explain in more detail how you obtain the final image? When I run the model I get two folders, cihp_edge_maps and cihp_parsing_maps, each containing two versions of the images, with and without the vis suffix. I don't fully understand the steps you describe here.
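My current understanding (an assumption on my part, not confirmed by the repo) is that the files without the vis suffix are single-channel label maps, where each pixel value is a body-part class ID, and the vis files are just colorized renderings of the same labels for inspection. This is a small sketch I used to check what label IDs a parsing map actually contains:

```python
import numpy as np
from PIL import Image

def inspect_parsing_map(path):
    """Load a CIHP-style parsing map and count the label IDs it contains.
    Assumes the non-'vis' file stores per-pixel part labels (0 = background);
    the '_vis' counterpart would be a colorized rendering of the same map."""
    arr = np.array(Image.open(path))
    if arr.ndim == 3:
        # Some exporters replicate the label value across channels.
        arr = arr[:, :, 0]
    labels, counts = np.unique(arr, return_counts=True)
    return dict(zip(labels.tolist(), counts.tolist()))
```

If that assumption is right, the non-vis parsing maps would be the ones used downstream, but I'd appreciate confirmation of which files feed into the final image.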