Closed Javacr closed 11 months ago
Greetings, I'm not able to reproduce this issue. Modifying the notebook with the following lines, the model continues to work as expected.
import cv2
import mediapipe as mp

# Read as BGR, reverse channel order to RGB, round-trip through grayscale.
cv_img = cv2.imread("image.jpg")[:, :, ::-1].copy()
cv_img = cv2.cvtColor(cv_img, cv2.COLOR_RGB2GRAY)
cv_img = cv2.cvtColor(cv_img, cv2.COLOR_GRAY2RGB)
image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cv_img)
Thank you! I solved this problem. The reason is that the hand is too small in the image, so the palm detector cannot detect it. When I crop the image, everything works!
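For anyone hitting the same symptom, the fix above amounts to cropping the frame so the hand occupies a larger share of the image before handing it to the detector. A minimal sketch of that cropping step, using NumPy only; the `center_crop` helper and the 0.5 fraction are illustrative assumptions, not part of the notebook:

```python
import numpy as np

def center_crop(img: np.ndarray, frac: float = 0.5) -> np.ndarray:
    """Return the central `frac` portion of an H x W x C image."""
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    y0 = (h - ch) // 2
    x0 = (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]

# Toy 480x640 RGB frame; cropping keeps the central region where the hand sits.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
crop = center_crop(frame, frac=0.5)
print(crop.shape)  # (240, 320, 3)
```

The cropped array can then be wrapped the same way as in the notebook, e.g. `mp.Image(image_format=mp.ImageFormat.SRGB, data=crop)`, assuming the crop is an RGB uint8 array.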
@Javacr
Thank you for your confirmation. Since this is no longer an issue on your end, we will move the status to resolved and close the issue.
This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
Windows 10
MediaPipe Tasks SDK version
0.10.3
Task name (e.g. Image classification, Gesture recognition etc.)
hand tracking
Programming Language and version (e.g. C++, Python, Java)
Python
Describe the actual behavior
I use cv2.cvtColor to convert an RGB image to grayscale and then give it three channels with GRAY2RGB; hand tracking outputs nothing for this image. The original RGB image works well.
Describe the expected behaviour
Standalone code/steps you may have used to try to get what you need
I follow the example: https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/hand_landmarker/python/hand_landmarker.ipynb
Other info / Complete Logs
No response