Closed karthik1997 closed 3 years ago
Hi @karthik1997 , it looks like you have an input mismatch. Please read the README file for testing custom images and try accordingly. If the error still persists, please comment with your inputs and their channels/dimensions.
The inputs are according to the instructions. I am sharing the images here:
@minar09
Hi @karthik1997 , it's very difficult to understand the issue just by looking at the images. Your input images seem okay, but I don't know their input channels. Please check the inputs with their dimensions. For example, the segmentation input should be a [0, 20] grayscale image, not an RGB one like the one shown here. I think you can find the origin of the issue with a little debugging. Good luck.
If it's still giving an error, you can comment here with the full error traceback. That way it will be easier to understand. Thank you.
@minar09 I used Graphonomy for segmentation; will that be an issue? The input image is RGB and the output is the segmentation image I posted. I converted that 24-bit image into 8-bit according to the prerequisite.
What are the label numbers of Graphonomy? Please check if they are similar to LIP/PGN. Also, there should be a generated grayscale [0, 20] output segmentation file; you don't need to convert from RGB. See the difference below or check the VITON dataset segmentation files.
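A quick way to verify this, sketched here with numpy (the helper name is hypothetical): a valid VITON-style parse file loads as a single-channel 2-D array with integer labels in [0, 20], while an RGB conversion loads as an (H, W, 3) array.

```python
import numpy as np

def looks_like_viton_parse(arr):
    """Heuristic check: 2-D array of integer labels within [0, 20]."""
    return arr.ndim == 2 and int(arr.min()) >= 0 and int(arr.max()) <= 20

# A [0, 20] grayscale parse passes; an RGB image of shape (H, W, 3) does not.
gray_parse = np.random.randint(0, 21, size=(256, 192))
rgb_image = np.zeros((256, 192, 3), dtype=np.uint8)
print(looks_like_viton_parse(gray_parse))  # True
print(looks_like_viton_parse(rgb_image))   # False
```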
@minar09 Hi, thank you for the help. The segmentation labels are the same in Graphonomy too. I tried with them as well; adding the images here:
attaching the entire traceback here:
```
Traceback (most recent call last):
  File "test.py", line 225, in
```
Based on the traceback, the error comes from the size of `agnostic`. It should have 22 channels, but the `agnostic` of your custom image has 28 channels. You can check the sizes of these inputs by debugging cp_dataset.py line 178 (print out their shapes). Check: `shape`: 1 channel; `im_h`: 3 channels; `pose_map`: 18 channels. Maybe one of your inputs has the wrong size. Regards.
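The channel arithmetic behind that suggestion can be sketched as follows (a minimal numpy stand-in for the PyTorch tensors in cp_dataset.py; shapes are illustrative):

```python
import numpy as np

# The "agnostic" input to GMM is the channel-wise concatenation of three parts.
shape = np.zeros((1, 256, 192))      # downsampled body-shape mask: 1 channel
im_h = np.zeros((3, 256, 192))       # head region of the person image: 3 channels
pose_map = np.zeros((18, 256, 192))  # one heatmap per COCO-18 keypoint

agnostic = np.concatenate([shape, im_h, pose_map], axis=0)
print(agnostic.shape[0])  # 22, matching the conv weight of size [64, 22, 4, 4]

# With a 24-keypoint pose, pose_map has 24 channels, so agnostic grows to
# 1 + 3 + 24 = 28 channels and triggers the reported mismatch.
bad_pose_map = np.zeros((24, 256, 192))
bad_agnostic = np.concatenate([shape, im_h, bad_pose_map], axis=0)
print(bad_agnostic.shape[0])  # 28
```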
@thaithanhtuan @minar09 ,
I found the error. I am getting a `pose_map` of 24 channels instead of 18. How can I solve this issue?
@thaithanhtuan @minar09 ,
I used the OpenPose PyTorch implementation for generating the keypoints.
@karthik1997 , use the coco-18 model from the original openpose repository.
Ok @minar09 ,
Thank you. I will try that out and post the try-on results here.
One more question... Is it possible to do try-on for bottomwear with this algorithm as well?
For apparel other than upper-body clothes, you can follow a similar procedure. You may need new datasets and to train separate models for that.
@minar09 , thank you. So try-on for both bottomwear and topwear is not possible with this algorithm even if we can get the data to train it? Do we have to go for a different approach?
@karthik1997 , not sure. This is an active research area with many challenges. You can explore the various latest research works or try your own approach.
@karthik1997 From the 24 keypoints, you can check what they are and try to remove 6 of them to get the 18 keypoints. Or change the input of GMM from 22 channels to 28 channels. About bottomwear, please define the problem: what are the inputs and the desired output?
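The first option can be sketched like this (hypothetical helper; the index list is a placeholder that you must verify against your detector's keypoint ordering):

```python
def select_keypoints(flat_keypoints, keep_indices):
    """Keep only the (x, y, confidence) triples at keep_indices.

    flat_keypoints is the usual flat list [x0, y0, c0, x1, y1, c1, ...].
    """
    out = []
    for i in keep_indices:
        out.extend(flat_keypoints[3 * i : 3 * i + 3])
    return out

# Example: a 24-point skeleton reduced to 18 points. Dropping indices 18..23
# is only a placeholder; check which 6 extra points your model actually adds.
kp24 = [float(v) for v in range(24 * 3)]
kp18 = select_keypoints(kp24, list(range(18)))
print(len(kp18) // 3)  # 18
```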
@thaithanhtuan ,
How can I change the GMM input so it accepts 28 channels?
Regarding the bottomwear, I have data for the desired person's photo, the required topwear photo and the required bottomwear photo. I need to try the top and bottom wear on the desired person's image.
If I have a good amount of data like this, can I achieve try-on results for top and bottom wear with the same code?
@minar09 @thaithanhtuan ,
I am getting the same error after extracting the pose with the COCO-18 model. It also gives 24 keypoints instead of 18.
@minar09 @thaithanhtuan ,
Can you provide me the link to the actual COCO-18 model file?
Openpose original repo: https://github.com/CMU-Perceptual-Computing-Lab/openpose
How to generate joints: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/demo_overview.md
Release: https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.6.0
Download models: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/models/getModels.bat or https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/models/getModels.sh
If you have issues running with openpose, you should refer to the original repository.
Thank You
I really appreciate your work.
I am getting an error like this while testing custom images:

```
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 22, 4, 4], expected input[3, 28, 256, 192] to have 22 channels, but got 28 channels instead
```