minar09 / cp-vton-plus

Official implementation for "CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On", CVPRW 2020
https://minar09.github.io/cpvtonplus/
MIT License

Image parsing #15

Closed santosh9sanjeev closed 4 years ago

santosh9sanjeev commented 4 years ago

I am trying to use inputs from the internet, i.e. an image of a person and an image of a cloth. I tried running dataset_neck_skin_connection.py. How should I get the image-parse for the input image? Please guide me, I am a beginner.

minar09 commented 4 years ago

Hi @santosh9sanjeev , to run the model with custom internet images, make sure you have the following:

  1. image (image of a person)
  2. image-parse (generate it from the person image with the LIP_JPPNet or CIHP_PGN pretrained networks; for LIP parsing, then run dataset_neck_skin_connection.py, and finally body_binary_masking.py)
  3. cloth (in-shop cloth image)
  4. cloth-mask (binary mask of the cloth image; you can generate it with a simple Pillow/OpenCV function)
  5. pose (pose keypoints of the person, generated with the openpose COCO-18 model)
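For step 4, here is a minimal cloth-mask sketch with Pillow/NumPy. It assumes the in-shop cloth photo has a near-white studio background; `make_cloth_mask` and the threshold value are my own names for illustration, not from the repo:

```python
import numpy as np
from PIL import Image

def make_cloth_mask(cloth_path, out_path, thresh=240):
    # Hypothetical helper: assumes a near-white background, so anything
    # darker than `thresh` is treated as cloth. Tune `thresh` per image set.
    gray = np.array(Image.open(cloth_path).convert("L"))   # grayscale copy
    mask = np.where(gray < thresh, 255, 0).astype(np.uint8)  # cloth -> 255
    Image.fromarray(mask).save(out_path)                   # 8-bit binary PNG
    return mask
```

A fixed threshold is fragile for light-colored garments; OpenCV's Otsu thresholding or a matting model is a more robust alternative.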

Hope that helps. Thank you.

santosh9sanjeev commented 4 years ago

Thank you @minar09 for your help. I completed all the steps as you mentioned, but I am facing problems with bit depth. The image-parse generated by CIHP_PGN has a bit depth of 24, whereas the one needed has a bit depth of 8. Using Pillow I converted the bit depth from 24 to 8, but then the generated image mask had a weird result; it did not correctly generate the mask. Any idea how to proceed or solve this? It would be very helpful.

thaithanhtuan commented 4 years ago

> Thank you @minar09 for your help. I completed all the steps as you mentioned, but I am facing problems with bit depth. The image-parse generated by CIHP_PGN has a bit depth of 24, whereas the one needed has a bit depth of 8. Using Pillow I converted the bit depth from 24 to 8, but then the generated image mask had a weird result; it did not correctly generate the mask. Any idea how to proceed or solve this? It would be very helpful.

It is not clear what the "weird result" for the generated mask looks like. Can you upload or share your cp_dataset code, or the CIHP_PGN result? Maybe the file format is different from the VITON dataset.

santosh9sanjeev commented 4 years ago

person-final-1 person-final-2 Both files are in PNG format, with dimensions 192x256 and bit depth 8. When I ran the code on these images I got the following masked images: person-final-2 person-final-1

minar09 commented 4 years ago

Hi @santosh9sanjeev , please make sure to use the grayscale ([0, 20]) files as input, not the RGB files from the CIHP_PGN-generated segmentation. Also, I think you don't need to create binary shape masks before testing for CIHP_PGN (although there is no harm in that; the binary-mask script was made for VITON/LIP-style segmentation), since CIHP_PGN produces better segmentation with a torso-neck label. You can get the body shape for input in cp_dataset.py with the following code:

```python
im_parse = Image.open(osp.join(self.data_path, 'image-parse', parse_name))  # read segmentation
parse_array = np.array(im_parse)                                            # convert to numpy array
parse_shape = (parse_array > 0).astype(np.float32)                          # get binary body shape
```

santosh9sanjeev commented 4 years ago

Thank you so much @minar09 for your help. I just have another small doubt: test.py requires the image-parse-new and image-mask folders. Since I am not generating image-mask and image-parse-new, should I change the code to use image-parse wherever image-mask and image-parse-new appear, or is there a better alternative? Thank you once again for your help.

minar09 commented 4 years ago

> Thank you so much @minar09 for your help. I just have another small doubt: test.py requires the image-parse-new and image-mask folders. Since I am not generating image-mask and image-parse-new, should I change the code to use image-parse wherever image-mask and image-parse-new appear, or is there a better alternative? Thank you once again for your help.

Yes, just change the code if you don't need them. Updating cp_dataset.py alone should be enough.

minar09 commented 4 years ago

Closing the issue as it's resolved. Feel free to reopen in case there are still problems. Results with custom images: #23

Pritam-N commented 3 years ago

> Hi @santosh9sanjeev , please make sure to use the grayscale ([0, 20]) files as input, not the RGB files from the CIHP_PGN-generated segmentation. Also, I think you don't need to create binary shape masks before testing for CIHP_PGN (although there is no harm in that; the binary-mask script was made for VITON/LIP-style segmentation), since CIHP_PGN produces better segmentation with a torso-neck label. You can get the body shape for input in cp_dataset.py with the following code:
>
> ```python
> im_parse = Image.open(osp.join(self.data_path, 'image-parse', parse_name))  # read segmentation
> parse_array = np.array(im_parse)                                            # convert to numpy array
> parse_shape = (parse_array > 0).astype(np.float32)                          # get binary body shape
> ```

I am trying with segmentation generated from Graphonomy. Any suggestion on how to convert that to grayscale with 0-20 classes?

minar09 commented 3 years ago

Actually, any image segmentation network should natively produce grayscale (label-index) output, so please check the actual network output rather than the color visualization.
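One quick way to sanity-check a parse file before feeding it to the model (a sketch; `inspect_parse` is a hypothetical helper, not repo code): a usable parse should open as a single-channel image with integer labels in [0, 20], not as a 3-channel/24-bit color visualization.

```python
import numpy as np
from PIL import Image

def inspect_parse(path):
    # Returns (PIL mode, unique pixel values). A usable CP-VTON+ parse is
    # single-channel ('L' or 'P' mode) with labels in [0, 20]; an 'RGB'
    # mode file is the color visualization, not the label map.
    img = Image.open(path)
    return img.mode, np.unique(np.array(img))
```

If this reports mode 'RGB', export the raw prediction from the parser instead of converting the visualization, since color-to-label conversion is lossy and palette-dependent.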

yashp0103 commented 3 years ago

Hi @santosh9sanjeev @minar09

Thanks for the information! I'm able to test on a custom image with some manual work on the JSON keypoints. The openpose-generated JSON file has fields like face_keypoints_2d and face_keypoints_3d, while the JSONs used by the CP-VTON+ model contain a field named face_keypoints only. So I just want to confirm: are the JSONs provided with the model manually modified, or directly generated from openpose?

If they are directly generated, can you help me understand what changes you made or what run command you used?

Thanks, Yash

minar09 commented 3 years ago

@yashp0103 , the CP-VTON+ model directly uses openpose-generated keypoints; no modification is needed. The face_keypoints are not used in this model, so you can ignore them. If your JSON has pose_keypoints_2d, you can use it directly by changing this line: https://github.com/minar09/cp-vton-plus/blob/master/cp_dataset.py#L152 to `pose_data = pose_label['people'][0]['pose_keypoints_2d']`
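A small loading sketch covering both field names (the `load_pose_keypoints` helper is my own; the (-1, 3) reshape mirrors how the flat openpose list is usually unpacked):

```python
import json
import numpy as np

def load_pose_keypoints(json_path):
    # openpose COCO-18 writes 18 joints as one flat [x, y, confidence, ...]
    # list. Newer openpose releases name the field 'pose_keypoints_2d';
    # older files use 'pose_keypoints'. Handle both.
    with open(json_path) as f:
        pose_label = json.load(f)
    person = pose_label['people'][0]
    pose_data = person.get('pose_keypoints_2d') or person['pose_keypoints']
    return np.array(pose_data).reshape((-1, 3))  # one (x, y, conf) row per joint
```

Note this reads only the first detected person; multi-person images need a selection step first.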

yashp0103 commented 3 years ago

Thank you so much @minar09 It's working!

Amin-asadii commented 3 years ago

Thank you so much @minar09 for your help. I use LIP_JPPNet for image-parse, but the results are not good at all. Please see this input image: 000001_01. The following results are obtained: 000001_0_vis 000001_0. I was expecting this output instead: 000001_12 (1) 000001_12 (2). Thank you very much for your help.


minar09 commented 3 years ago

@Amin-asadii , you can try CIHP-PGN pre-trained model for parsing, which should give better results.

Amin-asadii commented 3 years ago

> Python 91.0%
> MATLAB 9.0%

@minar09 Do I have to have MATLAB installed? How should I use it?

minar09 commented 3 years ago

@Amin-asadii , no need to install MATLAB for CP-VTON+.

Amin-asadii commented 3 years ago

@minar09 Hello dear friend, thank you for your help. Please attach the openpose COCO-18 Python link.

minar09 commented 3 years ago

@Amin-asadii , here you go: https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/download/v1.7.0/openpose-1.7.0-binaries-win64-gpu-python3.7-flir-3d_recommended.zip

Amin-asadii commented 3 years ago

Thank you for your help.
