Mutoy-choi / Tryondiffusion


Excuse me, how to generate landmarks in Jp and Jg? #2

Closed chunxia75qin closed 1 year ago

chunxia75qin commented 1 year ago

Excuse me, how do you generate the landmarks in Jp and Jg?

Mutoy-choi commented 1 year ago

I used AI-HUB data, which already comes with pose embeddings, but you can use OpenPose to get them yourself. Use the head, shoulders, and wrists. Later I will update the repo to use DensePose and OpenPose for the pose embeddings (Jp, Jg).
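For reference, a minimal sketch of parsing OpenPose's per-image JSON output into the head/shoulder/wrist points mentioned above. The `people[*]["pose_keypoints_2d"]` field and the flat x/y/confidence layout follow OpenPose's documented output format; the joint indices assume the BODY_25 model, and the function name and confidence threshold are illustrative assumptions, not code from this repo.

```python
import json
import numpy as np

# Joint indices in OpenPose's BODY_25 layout (assumption: BODY_25 model).
KEYPOINTS = {"nose": 0, "r_shoulder": 2, "r_wrist": 4,
             "l_shoulder": 5, "l_wrist": 7}

def load_pose_keypoints(json_str, conf_thresh=0.1):
    """Parse an OpenPose JSON string into a dict of (x, y) per joint.

    Joints whose detection confidence falls below `conf_thresh`
    are returned as None.
    """
    data = json.loads(json_str)
    if not data.get("people"):
        return {name: None for name in KEYPOINTS}
    # Flat list of (x, y, confidence) triplets for the first detected person.
    flat = np.asarray(data["people"][0]["pose_keypoints_2d"]).reshape(-1, 3)
    out = {}
    for name, idx in KEYPOINTS.items():
        x, y, c = flat[idx]
        out[name] = (float(x), float(y)) if c >= conf_thresh else None
    return out
```

The confidence filter matters for garment images, where many body joints are simply absent and OpenPose reports them with near-zero confidence.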

chunxia75qin commented 1 year ago

> I used AI-HUB data which already have pose embedding but you can use openpose to get pose embedding. use head, shoulders, wrists further i will update using densepose and openpose for pose embedding (Jp,Jg)

I'm curious why you don't just use the coordinates; according to the paper, the authors just use 2D pose keypoints.

Mutoy-choi commented 1 year ago

> I used AI-HUB data which already have pose embedding but you can use openpose to get pose embedding. use head, shoulders, wrists further i will update using densepose and openpose for pose embedding (Jp,Jg)

> I'm curious why not just use coordinates,according to the paper author just use 2D pose keypoints

The reason I suggested using the pose embeddings from the AI-HUB data is that it differs from the VITON dataset. While the paper may have used 2D pose keypoints, in practical applications, especially when dealing with new images, DensePose and OpenPose can provide more detailed and accurate pose embeddings. This is particularly helpful when you want to capture intricate details of the pose that might not be recoverable from 2D keypoints alone.
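To make the keypoint route concrete: raw 2D keypoints can be turned into a fixed-size pose embedding with a simple linear projection of the normalized coordinates. This is a self-contained sketch; the function name, the embedding dimension, and the fixed random matrix (standing in for a learned projection layer) are all illustrative assumptions, not this repo's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_keypoints(keypoints_xy, img_w, img_h, dim=64, weight=None):
    """Project 2D keypoints of shape (J, 2) into a `dim`-d pose embedding.

    Coordinates are normalized by the image size so the embedding is
    resolution-independent. In a trained model, `weight` would be a
    learned linear layer; a fixed random matrix keeps the sketch runnable.
    """
    kp = np.asarray(keypoints_xy, dtype=np.float64)
    kp = kp / np.array([img_w, img_h])   # normalize coordinates to [0, 1]
    flat = kp.reshape(-1)                # flatten to (2 * J,)
    if weight is None:
        weight = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return weight @ flat                 # pose embedding of shape (dim,)
```

Missing joints would need a sentinel value (e.g. zeros) before flattening, since the projection expects a fixed number of keypoints.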

chunxia75qin commented 1 year ago

Thanks a lot. I want to use the VITON-HD dataset for training. I found that the person images come with OpenPose keypoints, so I just need to train a model to get the embeddings, but the cloth images do not come with OpenPose output, so I plan to use the openpose repo from CMU-Perceptual-Computing-Lab to extract them. By the way, I found a mistake in your code: you call model1(combined_img, person_pose, garment_pose, ic_img), but the signature in ParallelUNet.py is forward(self, x, gar_emb, pose_emb, seg_garment), so the positional arguments do not line up with the parameter names.
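A stub illustrating the mismatch described above, and how keyword arguments make the intended binding explicit. The class body is a placeholder that just echoes the bindings, and the assumption that `person_pose` was meant for `pose_emb` (and `garment_pose` for `gar_emb`) is mine, inferred from the parameter names.

```python
class ParallelUNet:
    """Stub with the forward() signature quoted from ParallelUNet.py."""

    def forward(self, x, gar_emb, pose_emb, seg_garment):
        # Echo which input landed in which parameter so the binding
        # can be inspected; the real model would run the UNets here.
        return {"x": x, "gar_emb": gar_emb,
                "pose_emb": pose_emb, "seg_garment": seg_garment}

model1 = ParallelUNet()

# The positional call from the report: person_pose silently lands in
# gar_emb and garment_pose in pose_emb.
buggy = model1.forward("combined_img", "person_pose", "garment_pose", "ic_img")

# Keyword arguments make the intended binding explicit and order-proof
# (assuming pose_emb should receive the person pose).
fixed = model1.forward(x="combined_img", gar_emb="garment_pose",
                       pose_emb="person_pose", seg_garment="ic_img")
```

Because both conditioning inputs are pose-like tensors of similar shape, this kind of swap raises no error at runtime, which is why keyword arguments are worth using here.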

KeyaoZhao commented 1 year ago

> thanks a lot. I want to use VITON-HD dataset for training, I found the person image provided the openpose, just need to train a mdol to get embeddings, but the cloth image does not provide openpose, so I plan to use openpose repo provided by CMU-Perceptual-Computing-Lab to get openpose. by the way, I find a mistake in your code, when you pass parameters model1(combined_img, person_pose, garment_pose, ic_img) to forward(self, x, gar_emb, pose_emb, seg_garment) in ParallelUNet.py

Hello! I also want to use the VITON-HD dataset for training. Did you manage to train the model successfully? And how did you get the OpenPose output for the cloth images? If you could share the preprocessing procedure, I would be very grateful!