Closed zhongfansun closed 1 year ago
Thank you for providing your project. However, training one epoch of the MLM task takes me about 6 hours, even after halving `_per_gpu_train_batchsize` to 16. I am running on a single 3090. Is this normal? Looking forward to your reply.