fnzhan / UNITE

[CVPR 2022 Oral] Marginal Correspondence for Conditional Image Generation, [CVPR 2021] Unbalanced Feature Transport for Exemplar-based Image Translation

Training is very slow, is that normal? #8

Closed 22TonyFStark closed 2 years ago

22TonyFStark commented 2 years ago

Hi! I'm training UNITE on 4 RTX 3090 GPUs with the following settings:

```
python3 train.py \
  --name test \
  --dataset_mode my_custom \
  --dataroot 'train/' \
  --correspondence 'ot' \
  --display_freq 500 \
  --niter 25 \
  --niter_decay 25 \
  --maskmix \
  --use_attention \
  --warp_mask_losstype direct \
  --weight_mask 100.0 \
  --PONO \
  --PONO_C \
  --use_coordconv \
  --adaptor_nonlocal \
  --ctx_w 1.0 \
  --gpu_ids 0,1,2,3 \
  --batchSize 8 \
  --label_nc 29 \
  --ndf 64 \
  --ngf 64 \
  --mcl \
  --nce_w 1.0
```

However, training is extremely slow. When I print a message every iteration like this:

```python
for i, data_i in enumerate(dataloader, start=iter_counter.epoch_iter):
    print("iter", i)
```

it turns out that each iteration takes about 3 seconds, which may be abnormally slow. I have trained CoCosNet v1 with a batch size of 16 and it runs fine. Maybe I'm doing something wrong? Could you give me some advice? Thanks!
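One way to narrow down where the 3 seconds per iteration go is to time the dataloader separately from the GPU step. Below is a minimal sketch, assuming the same loop variables as above (`dataloader`, `iter_counter`) and with the actual forward/backward calls elided; it is only a diagnostic probe, not part of the UNITE code:

```python
import time
import torch

# Rough timing probe: split each iteration into the time spent waiting on the
# dataloader (data_time) and the time spent on the training step itself
# (step_time). If data_time dominates, the bottleneck is I/O / preprocessing
# (e.g. too few dataloader workers); if step_time dominates, it is the model.
end = time.time()
for i, data_i in enumerate(dataloader, start=iter_counter.epoch_iter):
    data_time = time.time() - end

    # ... existing forward/backward/optimizer calls go here ...

    torch.cuda.synchronize()  # wait for queued GPU work before reading the clock
    step_time = time.time() - end - data_time
    print(f"iter {i}: data {data_time:.2f}s, step {step_time:.2f}s")
    end = time.time()
```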