yq1011 opened 7 years ago
With the large training dataset, model convergence is slow. We use two Titan X (old version) GPUs and train for six days. You can reduce the batch size from 10 to 8 to speed it up a little; we did not try a batch size below 8. I will plot the loss with respect to iterations and post it here.
Thanks a lot!
I deleted the wrong figure posted before, this will be updated soon.
Hi, is this the loss of L1 or L2?
hi, any updates? :D
@ZheC Can you post the loss vs iterations curve again?
@yq1011 What was your final result? Did it converge? How long did you train it? I have been training this model for days and it is not as fast as @ZheC said: with 4 GPUs trained for 2 days, I am only at about 20k iterations...
I think we should talk in terms of epochs instead (the training log prints that). @ZheC when you mentioned that you were using 2 x Titan X with a batch size of 10, did that mean the effective batch size was 20 (10 per GPU)?
I have the same problem as you @yq1011 . Did you get a converged loss finally?
@ZheC Hi, would you be able to post the loss curve again? I would like to compare with the model when I train it locally to make sure it is performing comparably. Thanks!
@ZheC Can you post the loss curve?
@ZheC Could you at least tell us the final loss value so that we can compare?
Hi all, I am really sorry for my late response. I graduated from CMU, so it is not easy to access the old files again. But I plotted the loss for the two levels here:
All the terminal output is: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/training/example_loss/output.txt
The code to plot the loss is here: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/training/example_loss/plotLoss.sh
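If you prefer to parse the terminal output yourself instead of using the shell script, here is a minimal Python sketch. The log format assumed below (`Iteration N, loss = X`) is the standard Caffe solver output, not necessarily the exact format in `output.txt`, so adjust the regex if your log differs:

```python
import re

# Matches standard Caffe solver log lines such as:
#   "I0101 12:00:00 solver.cpp] Iteration 1000, loss = 537.2"
# (format assumed from default Caffe logging)
LOSS_RE = re.compile(r"Iteration (\d+), loss = ([\d.eE+-]+)")

def parse_loss(lines):
    """Return a list of (iteration, loss) pairs found in the log lines."""
    points = []
    for line in lines:
        m = LOSS_RE.search(line)
        if m:
            points.append((int(m.group(1)), float(m.group(2))))
    return points

if __name__ == "__main__":
    with open("output.txt") as f:          # path assumed; use your own log
        points = parse_loss(f)
    for it, loss in points:
        print(it, loss)
```

The resulting pairs can be fed straight into any plotting tool (gnuplot, matplotlib) to reproduce a loss-vs-iterations curve.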
@ZheC Thanks for sharing!!
@ZheC Hi, I set the parameters based on your terminal output and trained, but training terminated at about iteration 1200 without any log shown on the screen. Do you have any idea what could cause this? Thanks.
Hi @ZheC, thanks for your great work! I have some questions about training the pose model. Your loss plots
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/training/example_loss/Loss_l1.png
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/training/example_loss/Loss_l2.png
only show results up to 250,000 iterations. However, in OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose), the model under https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/models/pose/coco is pose_iter_440000.caffemodel. I would like to know whether it was trained for 250,000 or 440,000 iterations. Thanks for your attention.
Has anyone reproduced the results in the paper, or tried training a smaller model with only 2 stages?
@yq1011 Did you get the same results as the paper? Thanks
@Ai-is-light I use the 440,000-iteration model. I pick the best iteration based on the evaluation score on a validation set: I keep testing the accuracy of the trained model at different iterations. The best iteration is not fixed across different models, so you probably want to follow the same procedure to pick your trained model.
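The model-selection procedure described above (evaluate each saved snapshot on a validation set and keep the best one) can be sketched as follows; the scores and snapshot iterations below are hypothetical illustrations, not real results:

```python
def pick_best_checkpoint(scores):
    """Given {iteration: validation score}, return the iteration with the
    highest score (higher is better, e.g. COCO AP on a held-out set)."""
    return max(scores, key=scores.get)

# Hypothetical validation scores for three saved snapshots:
scores = {250000: 0.561, 350000: 0.572, 440000: 0.579}
best = pick_best_checkpoint(scores)
print(best)  # the snapshot to keep, e.g. pose_iter_<best>.caffemodel
```

The key point is that the winning iteration differs between training runs, so the validation sweep has to be repeated for each newly trained model.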
Hi @ZheC, I have a question: how much does the number of images used in the evaluation affect the evaluation score? I saw in your paper that you chose 1160 images randomly. Why not use the whole validation set?
Hello, how do you choose the stepsize for a given number of iterations?
The author's default parameters are stepsize=13106 for iterations=600000.
Thank you
Hi,
Can you share how long it took you to train, and on what GPU? And what should the final loss be?
After 2 days of training on a K80, it has only reached 17,100 iterations and the loss is still in the range of 500 to 1000. Is this right?
Thanks