Closed YSLDTZY closed 10 months ago
Sorry for the delayed reply. The total batch size of Group Pose is 16 (8 GPU cards $\times$ 2 images per card, or 16 GPUs $\times$ 1 image each). The model is sensitive to the learning rate and batch size. To align with previous methods, we have not tested our models with smaller batch sizes. If enough GPUs are not available, you can refer to this issue: https://github.com/Michel-liu/GroupPose/issues/8#issuecomment-1784553214.
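For reproduction on fewer GPUs, a common heuristic is the linear learning-rate scaling rule: scale the learning rate in proportion to the total batch size. This is only a sketch of that general heuristic, not a recommendation confirmed by the Group Pose authors (the linked issue has their actual guidance); the function name `scale_lr` is hypothetical.

```python
# Hypothetical sketch of the linear LR scaling rule -- an assumption,
# not the Group Pose authors' confirmed recipe.

def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale the learning rate proportionally to the total batch size."""
    return base_lr * new_batch / base_batch

# Reference setup from the reply: lr = 1e-4 at total batch size 16
# (e.g. 8 GPUs x 2 images per GPU, the per-GPU value seen in the config).
total_batch = 8 * 2  # = 16
print(scale_lr(1e-4, total_batch, 8))  # LR for total batch size 8
```

Note that `total batch size = number of GPUs × per-GPU batch size`, which is why the config's value of 2 is consistent with the paper's 16.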
Dear author, when I tried to reproduce this article, I found that the loss does not decrease quickly and the model may even fail to converge with lr=0.0001 from the config file. May I ask if the learning rate was written incorrectly? Secondly, the batch_size reported in the article is 16, but it is 2 in the config file. Is this an error in the config file?