This is the official implementation of the work presented at CVPR 2024, titled Multiple View Geometry Transformers for 3D Human Pose Estimation (MVGFormer).
Hi, could the authors share which GPUs MVGFormer was trained on and how long the training took? Also, was the best model trained for 100 epochs?
Hi, we trained on 8 V100 GPUs with a batch size of 1 per GPU, for a total batch size of 8. Training takes about a day. The best model was trained for 40 epochs.