THUDM / Inf-DiT

Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
Apache License 2.0

about training device #9

Open zhaozhaoooo opened 3 weeks ago

zhaozhaoooo commented 3 weeks ago

When you trained the model, which GPUs did you use, how many, and how long did training take?

yzy-thu commented 3 weeks ago

About 2,000 H800 GPU-days.
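A rough way to read that figure: under an (idealized) assumption of perfect linear scaling, the ~2,000 H800 GPU-days can be divided by a hypothetical cluster size to estimate wall-clock time. The cluster sizes below are illustrative, not from this thread.

```python
# Back-of-envelope wall-clock estimate from the reported ~2,000 H800 GPU-days.
GPU_DAYS = 2000

def days_to_train(num_gpus):
    # Assumes perfect linear scaling, which real distributed training
    # will not achieve (communication and data-loading overheads).
    return GPU_DAYS / num_gpus

print(days_to_train(8))    # 250.0 days on 8 GPUs
print(days_to_train(64))   # 31.25 days on 64 GPUs
```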

Ree1s commented 2 weeks ago

Hi, may I know the batch size per GPU?

yzy-thu commented 2 weeks ago

> Hi, may I know the batch size per GPU?

1
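With only 1 sample per GPU, the effective (global) batch size comes entirely from the number of GPUs and any gradient-accumulation steps. A minimal sketch of that arithmetic, with a hypothetical world size and accumulation factor (neither is stated in this thread):

```python
# Effective batch size when each GPU processes 1 sample per step.
# world_size and grad_accum_steps here are hypothetical examples.
def effective_batch_size(per_gpu_batch, world_size, grad_accum_steps=1):
    return per_gpu_batch * world_size * grad_accum_steps

print(effective_batch_size(1, 8))     # 8 samples per optimizer step
print(effective_batch_size(1, 8, 4))  # 32 with 4 accumulation steps
```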

zimenglan-sysu-512 commented 1 week ago

Would eight 32 GB V100 GPUs be enough for training?