ZheC / Realtime_Multi-Person_Pose_Estimation

Code repo for realtime multi-person pose estimation in CVPR'17 (Oral)

finetuning the model #244

Closed soans1994 closed 3 years ago

soans1994 commented 3 years ago

Dear Author,

I tried fine-tuning the model from the pretrained pose_iter_440000.caffemodel with a custom set of images and annotations, but the loss is very high and the inference detections are bad. I used the following bash file

cohogain commented 3 years ago

Hi @soans1994, did you resolve this issue? I am experiencing a similar situation. Thanks

soans1994 commented 3 years ago

> Hi @soans1994, did you resolve this issue? I am experiencing a similar situation. Thanks

Try running all the steps and the train command again; it seems like a bug. Sometimes when running the train command the loss is empty, and if you continue training from that state the model output is bad.
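One way to catch the "empty loss" situation early is to scan the training log before letting a run continue. This is only a sketch: the regex assumes the usual Caffe log line shape (`Iteration N, loss = X`), which you should verify against your own logs.

```python
import math
import re

def find_bad_loss_lines(log_text):
    """Return (iteration, raw_line) pairs whose loss field is empty,
    NaN, or otherwise non-finite in Caffe-style training log text."""
    bad = []
    iter_re = re.compile(r"Iteration (\d+), loss =\s*(\S*)")
    for line in log_text.splitlines():
        m = iter_re.search(line)
        if not m:
            continue
        iteration, raw = int(m.group(1)), m.group(2)
        try:
            if not math.isfinite(float(raw)):
                bad.append((iteration, line.strip()))
        except ValueError:  # empty or garbled loss field
            bad.append((iteration, line.strip()))
    return bad

# Tiny synthetic log: one good line, one NaN loss, one empty loss.
sample = """\
I0101 solver.cpp:228] Iteration 0, loss = 712.4
I0101 solver.cpp:228] Iteration 100, loss = nan
I0101 solver.cpp:228] Iteration 200, loss =
"""
print(find_bad_loss_lines(sample))
```

If this reports any bad iterations, restarting the run (as suggested above) is likely cheaper than continuing from a broken state.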

cohogain commented 3 years ago

Thanks for your reply, I will try this. May I ask roughly how large a dataset you used to fine-tune the model?

soans1994 commented 3 years ago

I tried with as few as 200 images and the model gives good results. The loss drops quickly until around iteration 50,000, then drops more slowly for further iterations.

cohogain commented 3 years ago

Ok great, thank you. I am fine-tuning with 150 images, but the model seems to lose all accuracy and gives many incorrect predictions, or even none. The loss also does not drop below 100. I am not sure whether to keep training or whether my configuration is incorrect.

soans1994 commented 3 years ago

Yes. Check your annotation files too. The JSON file should follow the COCO format, including the number of keypoints. I didn't make any changes.
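For reference, COCO person annotations store 17 keypoints per person as flattened (x, y, v) triplets, where v is a visibility flag (0 = not labeled, 1 = labeled but not visible, 2 = labeled and visible). A minimal sanity check like the sketch below can catch annotations whose keypoint count or `num_keypoints` field doesn't match; the helper name is my own, not part of the repo.

```python
# Standard COCO keypoint order for the "person" category.
COCO_KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def validate_annotation(ann):
    """Sanity-check one COCO-style person annotation dict."""
    kps = ann["keypoints"]
    # 17 keypoints, each an (x, y, v) triplet flattened into one list.
    assert len(kps) == 3 * len(COCO_KEYPOINT_NAMES), \
        "expected %d values, got %d" % (3 * len(COCO_KEYPOINT_NAMES), len(kps))
    # num_keypoints must count exactly the keypoints with v > 0.
    labeled = sum(1 for v in kps[2::3] if v > 0)
    assert labeled == ann["num_keypoints"], \
        "num_keypoints=%d but %d keypoints have v>0" % (ann["num_keypoints"], labeled)
    return True

# Example: one person with only the nose and left shoulder labeled.
ann = {
    "image_id": 1,
    "category_id": 1,        # 1 = person in COCO
    "num_keypoints": 2,
    "keypoints": [320, 110, 2] + [0, 0, 0] * 4
               + [300, 200, 2] + [0, 0, 0] * 11,
    "bbox": [250, 80, 150, 400],
}
print(validate_annotation(ann))
```

Running a check like this over every annotation before generating training data is a cheap way to rule out format mismatches as the cause of a stuck loss.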

cohogain commented 3 years ago

That might also be it; I am currently using 12 keypoints (eyes, shoulders, elbows, wrists, hips, knees). I will annotate the rest. Can I ask one final question: when annotating images, do you initialise keypoints you cannot see (a covered ear or leg), or do you leave them unannotated? Currently I am not initialising them, but I see that in the coco-annotator tool you can set them as NOT-VISIBLE. That is my last question, thanks for all your help!

soans1994 commented 3 years ago

I will check my annotated file and attach it tomorrow.

cohogain commented 3 years ago

Thank you, I appreciate that!

soans1994 commented 3 years ago

I set the invisible points to zero, but I think your method is better. It also depends on your application. I use infrared images, so I sometimes annotate keypoints even when the body part is not visible, since the infrared images are very low quality and human parts can be hard to see; annotating them anyway gives better keypoint detection.
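The approach described above (keeping an estimated position for an occluded part) maps naturally onto COCO's visibility flag v=1 ("labeled but not visible"). A small sketch, assuming the flat `[x, y, v, ...]` keypoint layout; the helper name is illustrative, not from the repo:

```python
def mark_occluded(keypoints, occluded_indices):
    """Set v=1 (labeled but not visible) for the given keypoint indices,
    keeping their estimated x, y positions. `keypoints` is a flat
    [x, y, v, ...] list; a modified copy is returned."""
    kps = list(keypoints)
    for i in occluded_indices:
        if kps[3 * i + 2] == 2:   # currently marked as visible
            kps[3 * i + 2] = 1
    return kps

# Three keypoints; the second one is occluded but its position was estimated.
kps = [320, 110, 2,  305, 100, 2,  0, 0, 0]
print(mark_occluded(kps, [1]))  # → [320, 110, 2, 305, 100, 1, 0, 0, 0]
```

Keeping the (x, y) estimate while downgrading the flag preserves the supervision signal for occluded parts without pretending they were observed.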

sleep2.txt

cohogain commented 3 years ago

Ok, interesting, my data is in a similar format then. I believe uninitialised keypoints are given the value (1000, 1000) by the program. Would you be willing to also attach the dataset of ~200 images you used to train your model? It would be very helpful: if I cannot replicate your results on the same dataset, I will know the issue is in my training setup. Is this something you can still access? I appreciate your help so far.
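If the training code really does assign (1000, 1000) to uninitialised keypoints, as suggested above, a quick count of such sentinel values per annotation can reveal how much of the dataset is effectively unlabeled. This is only a sketch built on that claim from the thread; verify the sentinel value against your own generated data before relying on it.

```python
# Sentinel reportedly assigned to uninitialised keypoints (per this
# thread; confirm against your own generated training data).
SENTINEL = (1000, 1000)

def count_sentinel_keypoints(keypoints):
    """Count keypoints in a flat [x, y, v, ...] list that sit at the
    sentinel position, i.e. were never annotated."""
    xs, ys = keypoints[0::3], keypoints[1::3]
    return sum(1 for x, y in zip(xs, ys) if (x, y) == SENTINEL)

# One labeled keypoint and two sentinel (never-annotated) ones.
kps = [320, 110, 2,  1000, 1000, 0,  1000, 1000, 0]
print(count_sentinel_keypoints(kps))  # → 2
```

A dataset where most keypoints are sentinels would explain a loss that refuses to drop, since the network gets very little positive supervision.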

cohogain commented 3 years ago

OK, that's no problem, I will try some testing based on your info. Thanks for helping me.