lbc12345 / SeD

Semantic-Aware Discriminator for Image Super-Resolution

About training duration #6

Open Petrichor214 opened 3 months ago

Petrichor214 commented 3 months ago

Thank you very much for your code sharing, which is very detailed and specific!

How many GPUs did you use for training, and how long did it take?

I seem to need a lot of training time.


lbc12345 commented 3 months ago

Hi, thank you for your interest in our work! We used four 16 GB V100 GPUs and trained our model for about 35 hours. I checked my training log: the time between consecutive log entries is about 80 s. Your training time seems much longer than ours.
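For anyone who wants to compare their own run against the 80 s figure, the interval between consecutive log entries can be extracted from the training log with a short script. The timestamp format below is an assumption (BasicSR-style logs prefix each line with a date and time); adapt the pattern to whatever your trainer actually prints.

```python
import re
from datetime import datetime

# Hypothetical sample lines; real SeD training logs may format
# iterations and losses differently, but the leading timestamp is
# what matters here.
log_lines = [
    "2024-01-01 10:00:00 INFO iter 100 loss 0.12",
    "2024-01-01 10:01:20 INFO iter 200 loss 0.11",
]

# Pull the "YYYY-MM-DD HH:MM:SS" prefix off each line.
stamp = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")
times = [
    datetime.strptime(stamp.match(line).group(1), "%Y-%m-%d %H:%M:%S")
    for line in log_lines
]

# Seconds elapsed between each pair of consecutive log entries.
intervals = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(intervals)  # the two sample entries are 80 s apart
```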

YunYunY commented 2 months ago

Dear authors, thanks for the detailed training info. I use a single GPU with batch size 32, but the time between log entries is around 3 minutes. I checked that there is no CPU bottleneck. Could you please verify that the 80 s figure was measured with the following training config?

python train.py --opt options/train_rrdb_P+SeD.yml --resume pretrained/RRDB.pth

Thank you very much.

lbc12345 commented 2 months ago

We used four 16 GB V100 GPUs. If you train the model on a single GPU, this duration is normal.
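A quick back-of-envelope check supports this. Assuming the work per iteration is split roughly evenly across GPUs (a simplification that ignores communication overhead, which usually makes the multi-GPU setup somewhat less than N times faster), a single GPU should take up to about four times as long per log interval as the four-GPU setup:

```python
def expected_interval_single_gpu(multi_gpu_interval_s: float, num_gpus: int) -> float:
    """Naive upper-bound estimate under linear scaling: one GPU does
    num_gpus times the per-device work of each GPU in the multi-GPU run."""
    return multi_gpu_interval_s * num_gpus

# 80 s between logs on 4 GPUs -> up to ~320 s expected on 1 GPU.
# The ~3 minutes (~180 s) reported above falls within that range.
print(expected_interval_single_gpu(80.0, 4))
```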