sangyun884 / HR-VITON

Official PyTorch implementation for the paper High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions (ECCV 2022).

Are training pairs meant to be mismatched? #4

Closed nihirv closed 2 years ago

nihirv commented 2 years ago

Hi,

I downloaded the dataset provided in the VITON-HD repo.

Both papers train on the piece of clothing that is already on the model.

However the provided train_pairs.txt contains mismatched human and clothing ids.

Am I misunderstanding something or should I re-create the train_pairs file?
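If re-creating the file turns out to be necessary, a minimal sketch of rebuilding an aligned `train_pairs.txt` — assuming the VITON-HD layout where `train/image/` and `train/cloth/` share filenames (the helper name is hypothetical):

```python
# Sketch: rebuild train_pairs.txt with aligned pairs, i.e. each person
# image paired with its own garment (same filename in both columns).
# Assumes the VITON-HD layout where train/image/ and train/cloth/ share names.
import os

def write_aligned_pairs(image_dir, out_path):
    names = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))
    with open(out_path, "w") as f:
        for name in names:
            f.write(f"{name} {name}\n")  # image and cloth share the same id
    return len(names)
```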

koo616 commented 2 years ago

@nihirv As you can see from our code, the mismatch of train_pairs.txt has no effect on training :)

nihirv commented 2 years ago

@koo616 I'm not sure this is true. I ran one version with mismatched pairs and one with aligned pairs. The run with mismatched pairs struggles to produce correct geometry in the visualisations, and fails to capture anything useful about the colours or patterns of the clothing. The results with aligned pairs are significantly better.

HITRainer commented 2 years ago

@nihirv Like you, I have retrained with paired and unpaired data, respectively. Only the paired data produces a convincing result, and there may be a mistake at lines 43-44 in cp_dataset.py.

nihirv commented 2 years ago

> @nihirv: As you, I have retrained with paired and unpaired data, respectively. Only paired data can create a convincing result and maybe there has been a mistake in Line 43-44 in cp_dataset.py.

Yeah, actually - if `c_names` and `im_names` were swapped for `paired` and `unpaired` respectively, i.e.:

```python
self.c_names['paired'] = im_names
self.c_names['unpaired'] = c_names
```

then I believe the code would execute with the correct data.
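To make the swap concrete, here is a sketch of how the two-column pairs file could be loaded with that fix applied (the `load_pairs` helper name is hypothetical; the `paired`/`unpaired` keys follow cp_dataset.py):

```python
def load_pairs(data_list):
    """Hypothetical helper: read a two-column pairs file and apply the swap.

    Each line of data_list holds "<im_name> <c_name>". With the swap, the
    'paired' setting reuses the garment the person is already wearing,
    while 'unpaired' uses the garment listed in the second column.
    """
    im_names, c_names = [], []
    with open(data_list) as f:
        for line in f:
            im_name, c_name = line.strip().split()
            im_names.append(im_name)
            c_names.append(c_name)
    return im_names, {"paired": im_names, "unpaired": c_names}
```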

nihirv commented 2 years ago

@HITRainer Did you train on the original data they provided? What do your results look like? My current results (I'm about 60k steps into training) are not as high quality as the paper samples. I'm unsure how cherry-picked the samples in the paper were, and I'd like to know whether my poor results come from the additional data I added to training or whether this is the model's standard behaviour.

HITRainer commented 2 years ago

@nihirv I'm not sure of the reason, but I trained the model for 100K iterations with the train_pairs_zalando.txt they provided and cannot reproduce the authors' results. Still troubleshooting.

sangyun884 commented 2 years ago

We found that there was a mismatch between the train_pairs.txt we used and the one in the VITON-HD repo. We have updated our code, so please check again.

nihirv commented 2 years ago

@sangyun884 Thanks for the changes. I had already modified the code to be similar to what you changed - hopefully the changes help future users.

When I run it on the provided dataset it seems to be fitting well. Though when I add other data to the dataset, the results aren't as promising. That seems like an issue for me to figure out, though.

Could you keep this issue open for another day or two until I finish training?

Thank you again!

HITRainer commented 2 years ago

@nihirv Have you encountered the program getting stuck when training train_generator.py with multiple GPUs? In more detail, it gets stuck when calling `loss_gen_scaled.backward()` at line 317.
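For readers hitting the same hang: a common cause of multi-GPU stalls inside `.backward()` is `DistributedDataParallel` waiting on gradients for parameters that some ranks never used in the forward pass; PyTorch's `find_unused_parameters=True` flag is a frequent workaround. A minimal sketch of the general pattern — not the confirmed fix for this repo:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)  # stand-in for the try-on generator

# In a distributed launch, wrapping with find_unused_parameters=True lets
# DDP skip gradients for parameters unused on some ranks; otherwise
# loss.backward() can block forever waiting on an all-reduce:
# model = nn.parallel.DistributedDataParallel(
#     model.cuda(local_rank),
#     device_ids=[local_rank],
#     find_unused_parameters=True,
# )
```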

nihirv commented 2 years ago

Sorry for the late reply @HITRainer. I'm only training on one GPU. Are you still facing the issue?

HITRainer commented 2 years ago

@nihirv Hi, thanks for replying. I have solved that problem already.

nihirv commented 2 years ago

I'll close this issue then:)

kinivi commented 2 years ago

> @nihirv Hi, thanks for replying. I have solved that problem already.

Hi @nihirv, could you please share the details of how you solved this issue?

nihirv commented 2 years ago

> @nihirv Hi, thanks for replying. I have solved that problem already.
>
> Hi @nihirv , could you please share the details of how have you solved this issue?

@kinivi I think you meant to tag @HITRainer

kinivi commented 2 years ago

> @nihirv Hi, thanks for replying. I have solved that problem already.
>
> Hi @nihirv , could you please share the details of how have you solved this issue?

Ooops, indeed :). @HITRainer could you please help?