Closed: aqsc closed this issue 3 years ago.
Can you clarify what you mean by "can't getting correct keypoint"? If you mean the results are incorrect, can you provide an example (including the original test image and the output image)?
Sorry, we are unable to upload the images. But we find that the results from the Python whl are worse than the real-device results we get with the APK.
The Python and Android APIs use the same models, but they have different inference backends (CPU vs. GPU), which may cause the discrepancy.
So how should we run images through the whl so that the results best match the app?
You may modify the MediaPipe source code to use the GPU graph. That requires building the MediaPipe Python package locally; see how to do it at https://github.com/google/mediapipe/issues/1042.
How do we define file_list for the iteration?
https://github.com/google/mediapipe/issues/1511#issuecomment-782221553
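For reference, one common way to build such a list is with the glob module. This is a minimal sketch, not from the linked comment, assuming a hypothetical images/ directory containing the test images:

    import glob

    # Collect all .jpg and .png test images from a hypothetical images/
    # directory, sorted for deterministic output file naming.
    file_list = sorted(glob.glob('images/*.jpg') + glob.glob('images/*.png'))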
We use the following code to predict the hand keypoint coordinates after installing the Python package mediapipe-0.8.2-cp36-cp36m-manylinux2014_x86_64.whl, but we cannot get correct keypoint positions when testing many images. What is the problem?
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

# For static images:
hands = mp_hands.Hands(
    static_image_mode=True,
    max_num_hands=2,
    min_detection_confidence=0.5)
for idx, file in enumerate(file_list):
  # Read an image, flip it around the y-axis for correct handedness output
  # (see above).
  image = cv2.flip(cv2.imread(file), 1)
  # Convert the BGR image to RGB before processing.
  results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

  # Print handedness and draw hand landmarks on the image.
  print('Handedness:', results.multi_handedness)
  if not results.multi_hand_landmarks:
    continue
  image_height, image_width, _ = image.shape
  annotated_image = image.copy()
  for hand_landmarks in results.multi_hand_landmarks:
    print('hand_landmarks:', hand_landmarks)
    # Landmark coordinates are normalized to [0, 1]; scale them by the image
    # size to get pixel coordinates.
    print(
        f'Index finger tip coordinates: (',
        f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width}, '
        f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height})')
    mp_drawing.draw_landmarks(
        annotated_image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
  cv2.imwrite(
      '/tmp/annotated_image' + str(idx) + '.png',
      cv2.flip(annotated_image, 1))
hands.close()
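As a side note, recent MediaPipe Python releases also support using the solution as a context manager, which releases its resources automatically. A minimal sketch (the exact version that introduced this may vary), assuming the same file_list as above:

    with mp_hands.Hands(
        static_image_mode=True,
        max_num_hands=2,
        min_detection_confidence=0.5) as hands:
      for idx, file in enumerate(file_list):
        image = cv2.flip(cv2.imread(file), 1)
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        # ... same post-processing as above; no explicit hands.close() needed.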