cuiaiyu / dressing-in-order

(ICCV'21) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik
https://cuiaiyu.github.io/dressing-in-order

About pose keypoints #21

Closed jerry940080 closed 2 years ago

jerry940080 commented 2 years ago

Can you provide the details of how you obtained the pose keypoints for your dataset?

cuiaiyu commented 2 years ago

Option 1: Follow Get Started -> Data Preparation -> "Optionally, you can also generate these files by yourself." in the PATN repository to get the keypoint labels.

Option 2: For a test image xxx.jpg, run OpenPose to get the keypoints file xxx_keypoints.json (BODY_25 format), and then load the keypoints as a PyTorch tensor for the single test image with the following function:

import json

import numpy as np
import torch

import pose_utils  # provides cords_to_map; adjust the import path to where it lives in this repo

def load_pose_from_json(pose_json, target_size=(256,256), orig_size=(256,256)):
    '''
    Convert the OpenPose-detected keypoints (in a .json file) to the desired heatmap.
    Input:
    - pose_json (str): the file path of the OpenPose detection in .json.
    - target_size (tuple): the size of the output heatmap.
    - orig_size (tuple): the size of the original image that OpenPose detected the keypoints on.
    Output:
    - heatmap (torch.Tensor): the heatmap in size 18xHxW as specified by target_size.
    '''
    with open(pose_json, 'r') as f:
        anno = json.load(f)
    if len(anno['people']) < 1:
        # no person detected: return an all-zero heatmap
        a, b = target_size
        return torch.zeros((18, a, b))
    anno = list(anno['people'][0]['pose_keypoints_2d'])
    # OpenPose stores keypoints as (x0, y0, c0, x1, y1, c1, ...);
    # the heatmap uses (row, col) order, so x here holds the image
    # y-coordinates and y holds the image x-coordinates.
    x = np.array(anno[1::3])
    y = np.array(anno[::3])

    # drop keypoint 8 (MidHip) to map BODY_25 onto the 18-keypoint layout
    x[8:-1] = x[9:]
    y[8:-1] = y[9:]
    x[x == 0] = -1  # mark undetected keypoints as invalid
    y[y == 0] = -1
    coord = np.concatenate([x[:, None], y[:, None]], -1)
    pose = pose_utils.cords_to_map(coord, target_size, orig_size)
    pose = np.transpose(pose, (2, 0, 1))
    pose = torch.Tensor(pose)
    return pose[:18]
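
A minimal usage sketch (the file name follows the xxx placeholder above, and the sizes are illustrative, assuming (height, width) order):

heatmap = load_pose_from_json('xxx_keypoints.json',
                              target_size=(256, 256),  # (H, W) of the output heatmap
                              orig_size=(256, 256))    # (H, W) of the image OpenPose ran on
print(heatmap.shape)  # torch.Size([18, 256, 256])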
Djvnit commented 2 years ago

Hi @cuiaiyu, thanks a lot for this great work. I'm working on cloth virtual try-on as my final-year project. The demo worked fine for me, but currently I'm facing some issues performing virtual try-on on my own image. Steps I followed:

  1. Resized my full-size .jpg image to 750x1101 pixels (as all the images in the test folder are of this dimension) and added it to the test folder.

  2. Ran OpenPose on the image and obtained the keypoints in a .json file, manually separated the x and y keypoints from the (x0, y0, c0, x1, y1, c1, ...) sequence, and added the file name along with the y and x keypoints respectively to fasion-annotation-test.csv (see the sketch after this list).

  3. Used SCHP to obtain the human parsing map and added it to testM_lip.

  4. Added the image name to test.lip and to standard_test_anns.txt under print, just for testing.

  5. After that I ran demo.ipynb and got an error in the data loading step (error screenshot attached).
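
For reference, a minimal sketch of the conversion in step 2, assuming a PATN-style annotation row name:keypoints_y:keypoints_x with 18 values per list and -1 for missing keypoints (the function and file names are placeholders; verify the separator and column order against the existing rows in fasion-annotation-test.csv):

import json
import numpy as np

def json_to_annotation_row(name, pose_json):
    # turn one OpenPose BODY_25 .json into a 'name:keypoints_y:keypoints_x' row
    with open(pose_json, 'r') as f:
        kp = json.load(f)['people'][0]['pose_keypoints_2d']
    x = np.array(kp[::3])   # image x-coordinates
    y = np.array(kp[1::3])  # image y-coordinates
    x[8:-1] = x[9:]  # drop MidHip to match the 18-keypoint layout
    y[8:-1] = y[9:]
    x[x == 0] = -1   # mark undetected keypoints
    y[y == 0] = -1
    ys = [int(v) for v in y[:18]]
    xs = [int(v) for v in x[:18]]
    return '%s:%s:%s' % (name, ys, xs)

# appended to the annotation file, e.g.:
# with open('fasion-annotation-test.csv', 'a') as f:
#     f.write(json_to_annotation_row('my_image.jpg', 'my_image_keypoints.json') + '\n')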

I have tried a lot to resolve this error but am unable to, and I'm approaching my deadline. Kindly help me test the model on a custom image.

Also, I'm unable to understand the use of fasion-pairs-test.csv when running the demo.

Hoping for your kind reply. Thanks a lot, @cuiaiyu!