Closed: santosh9sanjeev closed this issue 4 years ago.
Hi @santosh9sanjeev , to run the model with custom internet images, make sure you have the following:
Hope that helps. Thank you.
Thank you @minar09 for your help. I completed all the steps as you mentioned, but I am facing a problem with bit depth. The image-parse generated by CIHP_PGN has a bit depth of 24, whereas the one needed has a bit depth of 8. Using Pillow I converted the bit depth from 24 to 8, but then the generated image mask had weird results; it did not correctly generate the mask. Any idea how to solve this? It would be very helpful.
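For reference, the "bit depth" difference corresponds to the PIL image mode, so it can be checked directly. A minimal sketch (the file name `parse_demo.png` and the class-5 region are made up for the demo):

```python
import numpy as np
from PIL import Image

def is_label_map(path):
    """True if the PNG stores one 8-bit value per pixel ('L' or 'P' mode),
    i.e. a usable label map, rather than a 24-bit 'RGB' visualization."""
    return Image.open(path).mode in ("L", "P")

# Demo with a synthetic 8-bit label map (pixel values are class ids 0-20)
labels = np.zeros((256, 192), dtype=np.uint8)
labels[60:200, 50:140] = 5  # hypothetical "upper clothes" region
Image.fromarray(labels, mode="L").save("parse_demo.png")
print(is_label_map("parse_demo.png"))  # True
```

If this prints `False` for your parse file, you are feeding the RGB visualization rather than the label map.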
It is not clear how the generated mask looks weird. Can you upload it, or share your cp_dataset code or the result of CIHP_PGN? Maybe the file format is different from the VITON dataset.
Both files are in PNG format, with dimensions 192×256 and bit depth 8. When I ran the code on these images, I got the following masked images:
Hi @santosh9sanjeev, please make sure to use the grayscale ([0, 20] label) files as input, not the RGB files from the CIHP_PGN-generated segmentation. Also, I think you don't need to create binary shape masks separately before testing with CIHP_PGN (although there is no harm in that; the binary-mask script was made for VITON/LIP-style segmentation), since CIHP_PGN produces better segmentation with the torso/neck label. You can get the body shape for input in cp_dataset.py with the following code:

```python
import os.path as osp

import numpy as np
from PIL import Image

im_parse = Image.open(osp.join(self.data_path, 'image-parse', parse_name))  # read segmentation
parse_array = np.array(im_parse)                    # convert to numpy array
parse_shape = (parse_array > 0).astype(np.float32)  # get binary body shape
```
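After that, cp_dataset.py blurs the binary shape by downsampling and upsampling before feeding it to the network. A minimal sketch of that step, using a random stand-in mask and assuming the standard 192×256 input size:

```python
import numpy as np
from PIL import Image

fine_width, fine_height = 192, 256  # CP-VTON+ input size

# Stand-in binary body shape (in practice this comes from the parse above)
parse_shape = (np.random.rand(fine_height, fine_width) > 0.5).astype(np.float32)

# Downsample then upsample to get a coarse, blurred body-shape prior
shape_img = Image.fromarray((parse_shape * 255).astype(np.uint8))
shape_img = shape_img.resize((fine_width // 16, fine_height // 16), Image.BILINEAR)
shape_img = shape_img.resize((fine_width, fine_height), Image.BILINEAR)
print(shape_img.size)  # (192, 256)
```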
Thank you so much @minar09 for your help. I have another small doubt: test.py requires the image-parse-new and image-mask folders. Since I am not generating image-mask and image-parse-new, should I change the code to use image-parse wherever image-mask and image-parse-new appear, or is there a better alternative? Thank you once again for your help.
Yes, just change the code if you don't need them. Updating only cp_dataset.py should be enough.
Closing the issue as it's resolved. Feel free to reopen in case there are still problems. Results with custom images: #23
> please make sure to use grayscale ([0, 20]) files as input, not the RGB files from the CIHP_PGN generated segmentation. [...]
I am trying with segmentation generated from Graphonomy. Any suggestion on how to convert that to grayscale with 0-20 classes?
Actually, any image segmentation network should originally generate grayscale output, so please check the actual network output.
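If only the color-coded visualization is available, it can be mapped back to class indices with a palette lookup. A sketch, where the `PALETTE` colors are placeholders you would replace with the actual colors your Graphonomy visualization script uses:

```python
import numpy as np

# Hypothetical palette (class id -> RGB); the colors here are assumptions,
# not the real Graphonomy colors.
PALETTE = {
    0: (0, 0, 0),      # background
    5: (255, 85, 0),   # upper clothes (example color)
    13: (0, 0, 255),   # face (example color)
}

def rgb_to_labels(rgb_array):
    """Map an HxWx3 color-coded parse back to an HxW uint8 label map."""
    labels = np.zeros(rgb_array.shape[:2], dtype=np.uint8)
    for idx, color in PALETTE.items():
        labels[np.all(rgb_array == color, axis=-1)] = idx
    return labels

# Demo on a tiny synthetic image
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[:2, :] = (255, 85, 0)
labels = rgb_to_labels(rgb)
print(labels[0, 0], labels[3, 3])  # 5 0
```

Still, checking whether the network already writes a grayscale label map (as suggested above) is simpler than reversing the visualization.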
Hi @santosh9sanjeev @minar09
Thanks for the information! I'm able to test on custom images with some manual work on the JSON keypoints. The OpenPose-generated JSON file has fields like face_keypoints_2d and face_keypoints_3d, while the JSONs in the CP-VTON+ model contain only a field like face_keypoints. So I just want to confirm: are the JSONs provided with the model manually modified, or directly generated from OpenPose?
If they are directly generated, can you help me understand what changes you made or what run command you used?
Thanks, Yash
@yashp0103, the CP-VTON+ model directly uses OpenPose-generated keypoints; no modification is needed. The face_keypoints are not used in this model, so you can ignore them. If your JSON has pose_keypoints_2d, you can use it directly by just changing this line: https://github.com/minar09/cp-vton-plus/blob/master/cp_dataset.py#L152 to pose_data = pose_label['people'][0]['pose_keypoints_2d']
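To illustrate, here is a minimal sketch of reading the keypoints after that one-line change. The file `sample_keypoints.json` and its contents are stand-ins written by the demo itself, not real OpenPose output:

```python
import json
import numpy as np

# Write a minimal stand-in OpenPose JSON (18 COCO joints, x/y/confidence)
sample = {"people": [{"pose_keypoints_2d": [100.0, 50.0, 0.9] * 18}]}
with open("sample_keypoints.json", "w") as f:
    json.dump(sample, f)

# Load it the way cp_dataset.py does after the change
with open("sample_keypoints.json") as f:
    pose_label = json.load(f)
pose_data = pose_label["people"][0]["pose_keypoints_2d"]
pose_data = np.array(pose_data).reshape((-1, 3))  # one row per joint: x, y, confidence
print(pose_data.shape)  # (18, 3)
```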
Thank you so much @minar09 It's working!
Thank you so much @minar09 for your help. I use LIP_JPPNet for image-parse, but the results are not good at all. Please see this input image: The following results are obtained: I was expecting this output instead: Thank you very much for your help.
@Amin-asadii , you can try CIHP-PGN pre-trained model for parsing, which should give better results.
> Python 91.0%, MATLAB 9.0%

@minar09 Do I have to have MATLAB installed? How should I use it?
@Amin-asadii , no need to install Matlab for CP-VTON+.
@minar09 Hello dear friend, thank you for your help. Please share the link to the OpenPose COCO-18 Python release.
@Amin-asadii https://github.com/Amin-asadii , here you go: https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/download/v1.7.0/openpose-1.7.0-binaries-win64-gpu-python3.7-flir-3d_recommended.zip
I am trying to give inputs from the internet, i.e., take an image of a person and an image of the cloth. I tried running dataset_neck_skin_connection.py. How should I get the image-parse for the input image? Please guide me; I am a beginner.