Closed szh-bash closed 10 months ago
Hello, thanks for the feedback. The training configuration has been optimized for typical GPUs with 24-32GB of memory. However, we used 80GB A100 GPUs, which allowed much larger batch sizes. Feel free to modify the configs as needed to fit your hardware.
Originally posted by @theEricMa in https://github.com/theEricMa/OTAvatar/issues/10#issuecomment-1537972669
I trained with a batch size of 4 per GPU on 6 GPUs, and one epoch took exactly 1500 iterations. The pretrained model is named epoch_00005_iteration_000002000, so maybe you trained this model with more than a batch size of 8 per GPU on 8 GPUs?
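The arithmetic behind this guess can be sketched as follows. This is a rough back-of-the-envelope check, not the repo's actual data-loading code; the dataset size is inferred from the reporter's own run, and the drop-last rounding is an assumption:

```python
def iters_per_epoch(num_samples, batch_per_gpu, num_gpus):
    # One iteration consumes batch_per_gpu * num_gpus samples across all GPUs;
    # assumes the last partial batch is dropped (integer division).
    return num_samples // (batch_per_gpu * num_gpus)

# Reporter's run: 1500 iterations per epoch at 4 per GPU on 6 GPUs
# implies roughly 1500 * 4 * 6 = 36000 training samples.
samples = 1500 * 4 * 6

# The checkpoint epoch_00005_iteration_000002000 implies about
# 2000 / 5 = 400 iterations per epoch, i.e. an effective batch of
# roughly 36000 / 400 = 90 samples per iteration -- larger than
# the 8 * 8 = 64 that an 8-per-GPU, 8-GPU setup would give.
print(iters_per_epoch(samples, 4, 6))   # the reporter's setup
print(iters_per_epoch(samples, 8, 8))   # the hypothesized setup
```

Under these assumptions, an 8x8 configuration would give 562 iterations per epoch, not 400, which is why the checkpoint name suggests an even larger effective batch.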