Closed Jeff-Fudan closed 7 months ago
You should probably consider increasing your num_workers to 8 or more; the default is 1. I don't know much about how to accelerate the training beyond that, since this data pipeline is something I implemented myself using the PyTorch DataLoader rather than ByteDance's original internal package.
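For reference, a minimal sketch of what raising num_workers looks like with a plain PyTorch DataLoader (the dataset and shapes here are placeholders, not MagicPose's actual pipeline):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical tensors standing in for the real training data.
images = torch.randn(64, 3, 8, 8)
labels = torch.randint(0, 10, (64,))
dataset = TensorDataset(images, labels)

loader = DataLoader(
    dataset,
    batch_size=16,
    shuffle=True,
    num_workers=8,           # raise from the default so workers prefetch batches in parallel
    pin_memory=True,         # page-locked buffers speed up host-to-GPU copies
    persistent_workers=True, # keep worker processes alive across epochs (PyTorch >= 1.7)
)

for imgs, lbls in loader:
    # each batch arrives pre-assembled by a background worker process
    assert imgs.shape == (16, 3, 8, 8)
```

Whether this removes the GPU stalls depends on where the time actually goes; if per-sample decoding or augmentation is the bottleneck, more workers help, but if it is disk or network I/O, they may not.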
Well, I have set num_workers to 8 both for MagicPose and when running Moore's open-source code, yet MagicPose is still noticeably slower than Moore's code.
During MagicPose training, the GPU frequently sits idle waiting for batches to load.