Open soans1994 opened 3 years ago
Hi @soans1994
This project does not take occlusion into consideration. The model will always predict the full set of landmarks with maximum confidence, even for a random image.
I think you will need a whole new model and dataset to distinguish occluded landmarks from visible ones.
@yinguobing Thank you. I see, that's why the accuracy is very good. Does TensorFlow pose estimation work in a similar way? When I try a TensorFlow model trained on the COCO dataset, it cannot detect occluded parts. How is this different from the TF pose estimation? Can you give me some advice?
I'm a little confused. Could you provide a link to the model so I can better understand your question?
@yinguobing
This is the human pose implementation: https://github.com/CMU-Perceptual-Computing-Lab/openpose. I checked their face detection video here: https://www.youtube.com/watch?v=C1Sxk6zxWLM. You can notice that when the person covers his face, the eyes end up behind the hands, and only the other visible keypoints are detected. Also, for example, when I run this model on side-view faces, it still detects frontal-face keypoints. Is it because of a weak dataset that does not have enough side-view images?
I'm not sure. Understanding how OpenPose processes occluded facial parts would require reading and understanding their papers and code.
For this repo, you are right about the dataset. Most samples in the dataset have frontal faces, which makes the dataset highly unbalanced. You can use this module to draw the distribution.
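I don't know which module was meant here, but a minimal sketch of drawing such a distribution could look like this. The `yaw_angles` array below is a hypothetical stand-in for head-pose angles you would estimate from your own annotations:

```python
import numpy as np

# Hypothetical yaw angles (degrees), one per training sample; replace
# with angles estimated from your own dataset annotations. A frontal-heavy
# dataset shows up as a tall spike around 0 degrees.
rng = np.random.default_rng(0)
yaw_angles = rng.normal(loc=0.0, scale=15.0, size=5000)

# Bucket the samples into 15-degree bins to make the imbalance visible.
bins = np.arange(-90, 91, 15)
counts, _ = np.histogram(yaw_angles, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    print(f"[{lo:+4d}, {hi:+4d}) deg: {n} samples")
```

With real annotations you would see most of the mass in the central bins, which is exactly the imbalance described above.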
This situation could be improved by re-balancing the data or by using weighted loss functions. This is an advanced topic and I think you can find a handful of papers on it.
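As a rough illustration of the weighted-loss idea (not this repo's actual loss), here is the arithmetic in NumPy; in practice you would implement the same computation with TensorFlow ops inside a Keras custom loss. The 3x weight for the last sample is an arbitrary example value:

```python
import numpy as np

def weighted_mse(y_true, y_pred, sample_weights):
    """Mean squared error where each sample contributes according to its
    weight, e.g. higher weights for under-represented side-view faces."""
    per_sample = np.mean((y_true - y_pred) ** 2, axis=1)  # MSE per sample
    return np.sum(sample_weights * per_sample) / np.sum(sample_weights)

# Toy batch: 3 samples, 4 landmark coordinates each.
y_true = np.array([[0.0, 0.0, 1.0, 1.0],
                   [0.5, 0.5, 0.5, 0.5],
                   [1.0, 1.0, 0.0, 0.0]])
y_pred = y_true + 0.1  # constant prediction error of 0.1 everywhere

# Hypothetical weights: the last sample (say, a rare side view) counts 3x.
weights = np.array([1.0, 1.0, 3.0])
loss = weighted_mse(y_true, y_pred, weights)
print(loss)  # every squared error is 0.1**2, so the loss is 0.1**2
```

Samples from rare poses then pull harder on the gradients, which is one way to counter the frontal-face bias without collecting new data.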
@yinguobing Thank you for your valuable information. I will check it out.
Hello author,
When I tested your weights, they gave good results, but even when some face parts are not visible, the model still predicts an output (for example, when I cover my eyes or nose). Why is that? Is it because of the dataset? How can I overcome this problem?
Thank you