RCpengnan opened this issue 2 years ago
This alignment is done either from the detection stage or from the previous frame's keypoints. It consists of calculating the center and rotation of what I call in my code comments the "rectangle" or "rotated rectangle" (it is actually a square). In host mode, this is done in: https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/mediapipe_utils.py#L306
In edge mode, the code:
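In both modes the underlying computation is the same. As a minimal sketch of the idea (not a copy of the repo's code, and the exact scale factor may differ), the center, size and rotation of the square ROI can be derived from the two detection keypoints like this:

```python
import numpy as np

def normalize_radians(angle):
    # Wrap an angle into [-pi, pi)
    return angle - 2 * np.pi * np.floor((angle + np.pi) / (2 * np.pi))

def keypoints_to_rect(kp_mid_hip, kp_scale, target_angle=np.pi * 0.5):
    """Sketch: derive the center, size and rotation of the square body ROI
    from the 2 detection keypoints (normalized image coordinates).
    kp_mid_hip : (x, y) estimation of the mid hip center
    kp_scale   : (x, y) keypoint encoding the body size/rotation
    """
    x0, y0 = kp_mid_hip
    x1, y1 = kp_scale
    center = (x0, y0)
    # The square side is proportional to the distance between the 2 keypoints
    # (the actual scale factor used in the repo may differ)
    size = 2 * np.hypot(x1 - x0, y1 - y0)
    # Rotation chosen so that the hips->head direction points "up" in the crop
    # (y is negated because image y grows downwards)
    rotation = normalize_radians(target_angle - np.arctan2(-(y1 - y0), x1 - x0))
    return center, size, rotation
```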
Thank you for your reply! I'd like to ask you a few questions about the skeleton alignment. I found that some pose estimation algorithms currently can't detect inverted (upside-down) poses, but blazepose detects them very well. I think this is thanks to the skeleton alignment, but I don't know how the alignment is done in the detection stage. Do I need to detect the hips first? If the skeleton can't be detected in an inverted pose, how can the hip keypoints be detected? Since the skeleton alignment is not explained in detail in the paper, could you please explain how it is done in the detection stage? Thank you~
I think this paragraph clarifies how the detection stage works: https://google.github.io/mediapipe/solutions/pose.html#personpose-detection-model-blazepose-detector The pose detector is adapted from the mediapipe face detector. In addition to the face bounding box, the model infers 2 keypoints. One keypoint is an estimation of the mid hip center. The other, combined with the mid hip center keypoint, encodes the size and rotation of the whole body bounding box. It may look like a bit of magic, but the hips don't need to be visible in the image for the detection model to infer these 2 keypoints. For instance, blazepose works even on a close-up face picture. It makes sense because knowing the size and orientation of the face is enough to estimate a realistic body position and orientation.
I hope I answered your question.
Thank you for your reply, but I still have two questions. 1. I would like to know how these two additional keypoints are inferred. Is it based on the face detector method? Do I have to understand the idea of face detection in order to understand where these two keypoints come from? 2. After the pose is aligned, the skeleton needs to be mapped back to the original pose when drawing it. I don't know if I understand correctly. If yes, I would like to ask where the code for this part is. Thank you~
The detection model directly outputs the face bounding box and the 2 keypoints. In host mode, the parsing of the detection model output is done by this function: https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/mediapipe_utils.py#L181 More precisely, the model outputs an array of 12 floats for each detected body (896 bodies max):
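For the exact layout of those 12 floats, see decode_bboxes() in the link above. Purely as a rough illustration, assuming the usual SSD-anchor layout of [dx, dy, w, h, kp0_x, kp0_y, ..., kp3_x, kp3_y] with offsets relative to an anchor and divided by the model input size (both assumptions here), decoding one detection could look like this:

```python
import numpy as np

def decode_detection(raw, anchor, scale=128):
    """Sketch of decoding one detection (12 floats).
    raw    : the 12 floats of one detection
    anchor : (ax, ay, aw, ah) anchor center and size, normalized
    scale  : model input size; the actual value used in the repo may differ
    """
    ax, ay, aw, ah = anchor
    cx = raw[0] / scale * aw + ax    # box center x (normalized)
    cy = raw[1] / scale * ah + ay    # box center y
    w = raw[2] / scale * aw          # box width
    h = raw[3] / scale * ah          # box height
    kps = []
    for i in range(4, 12, 2):        # the additional keypoints
        kps.append((raw[i] / scale * aw + ax, raw[i + 1] / scale * ah + ay))
    return (cx, cy, w, h), kps
```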
Yes. The landmark regression model yields coordinates in the square rotated body bounding box, so we need to map them back to the image coordinate system if we want to draw the skeleton. In host mode, this is done here: https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/BlazeposeDepthai.py#L511-L522 https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/BlazeposeDepthai.py#L542 https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/BlazeposeDepthai.py#L549-L550 https://github.com/geaxgx/depthai_blazepose/blob/d79e1eee8441cea981ed5f7fdb02fba83832429c/BlazeposeDepthai.py#L553-L554
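Conceptually, this is the inverse of the crop transform: un-rotate the landmarks around the ROI center, rescale, and translate. A minimal sketch of that idea (not the repo's exact code; the sign convention of the rotation depends on how it was defined when cropping):

```python
import numpy as np

def roi_landmarks_to_image(lms, rect_center, rect_size, rotation):
    """Sketch: map landmarks expressed in the rotated square ROI
    (normalized, 0..1) back to image pixel coordinates.
    lms         : array of (x, y) landmark coordinates in the ROI
    rect_center : (x, y) ROI center in pixels
    rect_size   : ROI side length in pixels
    rotation    : ROI rotation in radians
    """
    lms = np.asarray(lms, dtype=float) - 0.5              # origin at ROI center
    cos_r, sin_r = np.cos(rotation), np.sin(rotation)
    rot = np.array([[cos_r, -sin_r], [sin_r, cos_r]])     # 2D rotation matrix
    pts = (lms * rect_size) @ rot.T                        # undo rotation, scale to pixels
    return pts + np.asarray(rect_center, dtype=float)     # translate to image position
```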
Thank you for your reply! I have looked up a lot of material; it only mentions that the model infers these two keypoints, but does not explain in detail how the inference works. The code is a little complicated and I don't understand it very well. Could you please explain the idea behind getting these two keypoints?
The idea behind getting these two keypoints is exactly what I said in my previous message; I don't know how to explain it differently. The 2 keypoints are, among other things, inferred by the detection model. The decode_bboxes() function just processes the model output to store the information in an instance of the Body class. In the end, the 1st keypoint (mid hip center) is stored in Body.pd_kps[0] and the second keypoint in Body.pd_kps[1], as normalized coordinates (between 0 and 1).
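For illustration only, assuming `body` is a Body instance filled by decode_bboxes() in host mode (the frame size below is hypothetical), the two keypoints could be read and converted to pixels like this:

```python
# body: a Body instance returned by the detection parsing (host mode)
mid_hip_x, mid_hip_y = body.pd_kps[0]   # normalized mid hip center estimation
scale_x, scale_y = body.pd_kps[1]       # keypoint encoding body size/rotation

frame_size = 1152                       # hypothetical square input frame size
print("mid hip (px):", mid_hip_x * frame_size, mid_hip_y * frame_size)
print("scale kp (px):", scale_x * frame_size, scale_y * frame_size)
```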
Hi there! I have a question somewhat relevant to this discussion. In this tutorial, https://google.github.io/mediapipe/solutions/pose#python-solution-api, you can obtain the pose using mediapipe.solutions.pose by directly passing in the image. However, in your implementation, you keep the pose detection and landmark estimation as 2 separate steps in the pipeline, basically re-implementing a feature from mediapipe. May I ask what's the reason behind this?
@tristanle22 Thanks to the mediapipe API, it may seem to the user that the pose estimation is done in one step, but behind the scenes it is actually a 2-step process, as explained here: https://google.github.io/mediapipe/solutions/pose#ml-pipeline
The solution utilizes a two-step detector-tracker ML pipeline, proven to be effective in our MediaPipe Hands and MediaPipe Face Mesh solutions. Using a detector, the pipeline first locates the person/pose region-of-interest (ROI) within the frame. The tracker subsequently predicts the pose landmarks and segmentation mask within the ROI using the ROI-cropped frame as input.
For this repo, I took direct inspiration from the mediapipe implementation, but just adapted it to the depthai library.
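To make the 2-step structure concrete, here is a rough per-frame sketch of the detector/tracker logic (function and attribute names are hypothetical, not the repo's API):

```python
def process_frame(frame, prev_body):
    """Sketch of the 2-step detector/tracker loop that mediapipe hides
    behind its one-call API."""
    if prev_body is None:
        body = run_pose_detector(frame)      # step 1: locate the person and the 2 keypoints
        if body is None:
            return None
    else:
        body = prev_body                     # reuse the previous frame's landmarks as the ROI source

    rect = body_to_rotated_square(body)      # center, size, rotation of the aligned ROI
    crop = warp_crop(frame, rect)            # rotated square crop fed to the landmark model
    body = run_landmark_model(crop, rect)    # step 2: predict the 33 landmarks in the ROI
    # Keep the body for the next frame only if the landmark score is good enough
    return body if body.lm_score > 0.5 else None
```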
The paper says: "we align the person so that the point between the hips is located at the center of the square image passed as the neural network input". Can you tell me where this part of the code is?