THUDM / Inf-DiT

Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
Apache License 2.0

about training device #9

Open zhaozhaoooo opened 5 months ago

zhaozhaoooo commented 5 months ago

When you trained the model, which GPUs did you use, and how many? And what was the total training time?

yzy-thu commented 5 months ago

About 2k H800 GPU-days.
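The figure above is a total compute budget, not a wall-clock time. As a rough sketch, the corresponding wall-clock estimate for a given cluster size follows by dividing GPU-days by the GPU count (the cluster size and scaling-efficiency factor below are hypothetical, not from this thread):

```python
def wall_clock_days(gpu_days: float, num_gpus: int, scaling_efficiency: float = 1.0) -> float:
    """Estimate wall-clock training days from a total GPU-days budget.

    Assumes throughput scales linearly with GPU count, damped by an
    efficiency factor in (0, 1] to account for communication overhead.
    """
    return gpu_days / (num_gpus * scaling_efficiency)

# Hypothetical example: the reported ~2000 H800-days on a 64-GPU cluster
# with perfect scaling would take 2000 / 64 = 31.25 wall-clock days.
print(wall_clock_days(2000, 64))
```

With a more realistic efficiency of, say, 0.9, the same run would stretch to roughly 35 days; the point is only that the "2k H800-days" number fixes the product of cluster size and time, not either one alone.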

Ree1s commented 5 months ago

Hi, may I know the batch size per GPU?

yzy-thu commented 5 months ago

> Hi, may I know the batch size per GPU?

1

zimenglan-sysu-512 commented 5 months ago

Would it be feasible to train on eight 32 GB V100 GPUs?