Lakonik / SSDNeRF

[ICCV 2023] Single-Stage Diffusion NeRF
https://lakonik.github.io/ssdnerf/
MIT License

multi-training #11

Hu-Yuch opened this issue 1 year ago

Hu-Yuch commented 1 year ago

Why is multi-GPU training with your code (2×3090) only as fast as single-GPU training?

Lakonik commented 1 year ago

Hi, I'm not sure what your question is. Isn't it natural that multi-GPU DDP training costs around the same time per iteration as single-GPU training?

Hu-Yuch commented 1 year ago

Yes. The ETA for multi-GPU training is the same as for single-GPU: 17 days.

Lakonik commented 1 year ago

In the config we set the total number of training iterations, so changing the number of GPUs will only affect the total number of epochs but not the training time.

And btw, the initial ETA is an unreliable overestimate.
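
To illustrate the point above, here is a minimal sketch (the variable names and the per-GPU batch size are assumptions for illustration, not values from the actual SSDNeRF config) of why a fixed iteration budget decouples training time from GPU count under DDP:

```python
total_iters = 1_000_000    # fixed iteration budget set in the config
batch_size_per_gpu = 8     # assumed per-GPU batch size

for num_gpus in (1, 2, 4):
    effective_batch = batch_size_per_gpu * num_gpus
    samples_seen = total_iters * effective_batch
    print(f"{num_gpus} GPU(s): effective batch {effective_batch}, "
          f"{samples_seen:,} samples over {total_iters:,} iterations")

# Time per iteration is roughly constant under DDP, so wall-clock time
# stays the same; only the number of epochs (dataset passes) changes.
```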

Hu-Yuch commented 1 year ago

I see. Does that mean that if I halve total_iters for multi-GPU training, I can get the same result as single-GPU training? I remember your paper reports only 6 days for multi-GPU training.

Lakonik commented 1 year ago

There's no need to change the schedule; the 17-day ETA is simply wrong.

Hu-Yuch commented 1 year ago

Okay, I see.

Hu-Yuch commented 1 year ago

If I use 4 GPUs, can I halve total_iters (to 500k) and get a similar result to 1000k iterations with 2 GPUs?
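
For reference, the arithmetic behind this question, as a hedged sketch (the per-GPU batch size is an assumed value, and matching the total sample count does not by itself guarantee an identical result, since the number of gradient updates is halved):

```python
batch_per_gpu = 8  # assumed per-GPU batch size

# 2 GPUs for 1000k iterations vs. 4 GPUs for 500k iterations
samples_2gpu = 1_000_000 * batch_per_gpu * 2
samples_4gpu = 500_000 * batch_per_gpu * 4
print(samples_2gpu == samples_4gpu)  # True: both schedules see the same samples
```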