sangyun884 / HR-VITON

Official PyTorch implementation for the paper High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions (ECCV 2022).

How to train the Condition Generator with multi-gpus? #5

Closed xiezhy6 closed 2 years ago

xiezhy6 commented 2 years ago

Hi,

Thanks for releasing the training code.

I would like to train the condition generator on another dataset (at a resolution of 512 x 384). However, running the full 300,000 steps under the default settings takes a long time (> 130 h). So I would like to ask whether the authors plan to release a multi-GPU version of the training code, or whether there is any suggestion for training the condition generator within 1-2 days.

koo616 commented 2 years ago

@xiezhy6 We don't plan to release multi-GPU training code yet. If you use DataParallel from the PyTorch library, you can do multi-GPU training easily :)
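A minimal sketch of that suggestion. Note the `TinyGenerator` below is a placeholder, not the repo's actual `ConditionGenerator` from `networks.py`; the same wrapping pattern applies to the real model:

```python
import torch
import torch.nn as nn

# Stand-in for the repo's ConditionGenerator -- the real one takes the
# training options and input channel counts; this toy module is only
# here so the sketch is self-contained and runnable.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyGenerator()
if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on every visible GPU and
    # splits each batch along dim 0; gradients are reduced onto GPU 0.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

out = model(torch.randn(4, 3, 8, 8))
print(tuple(out.shape))
```

With no extra GPUs (or on CPU), `nn.DataParallel` simply falls back to the plain module, so the training script stays single-GPU compatible.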

vtddggg commented 1 year ago

@xiezhy6 @koo616 I also want to modify train_condition.py into a DataParallel version. Although directly wrapping the ConditionGenerator and Discriminator with DataParallelWithCallback is easy, it is unclear whether this could make training unstable and harm the final performance.

Can you give me some advice? I will train the condition generator with DataParallel and share my results here. If you already have results from a DataParallel experiment, those would also be welcome. Thanks!!
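One commonly cited way around DataParallel's stability and load-balancing concerns is DistributedDataParallel (one process per GPU), which behaves like single-GPU training with a larger effective batch. A minimal sketch, with a toy `nn.Linear` in place of the repo's models; the `gloo` backend and single-process defaults are only so it runs anywhere, while real use would launch it via `torchrun` with the `nccl` backend:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets MASTER_ADDR/PORT, RANK, and WORLD_SIZE; these
    # defaults let the sketch run as a single CPU process.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Toy model standing in for ConditionGenerator / Discriminator.
    # DDP averages gradients across processes during backward(), so
    # each replica stays in sync without extra callbacks.
    model = DDP(nn.Linear(4, 2))
    out = model(torch.randn(3, 4))
    print(tuple(out.shape))

    dist.destroy_process_group()

main()
```

Launching with e.g. `torchrun --nproc_per_node=4 train_condition.py` would then run four synchronized replicas, each on its own GPU with its own shard of the batch (via a `DistributedSampler` on the DataLoader).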