google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0
27.4k stars · 5.15k forks

MediaPipe Holistic Landmarker in Python null_ptr exception #5181

Open victoriayuechen opened 8 months ago

victoriayuechen commented 8 months ago

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

Ubuntu 22.04

Mobile device if the issue happens on mobile device

No response

Browser and version if the issue happens on browser

No response

Programming Language and version

Python

MediaPipe version

0.10.10

Bazel version

No response

Solution

Holistic

Android Studio, NDK, SDK versions (if issue is related to building in Android environment)

No response

Xcode & Tulsi version (if issue is related to building for iOS)

No response

Describe the actual behavior

Holistic landmarker 0.10.10 breaks with GPU on and off

Describe the expected behaviour

Holistic landmarker works

Standalone code/steps you may have used to try to get what you need

I was using the HandLandmarker (from tasks.vision, not legacy) for extracting the hand landmarks. Since the new version of the HolisticLandmarker was released, I tried to use that too. The setup was as follows (similar to the HandLandmarker):

import mediapipe as mp

BaseOptions = mp.tasks.BaseOptions
HolisticLandmarker = mp.tasks.vision.HolisticLandmarker
HolisticLandmarkerOptions = mp.tasks.vision.HolisticLandmarkerOptions
VisionRunningMode = mp.tasks.vision.RunningMode

options = HolisticLandmarkerOptions(
    base_options=BaseOptions(
        model_asset_path='mp_models/holistic_landmarker.task',
        delegate=1,  # 1 = GPU delegate, 0 = CPU
    ),
    running_mode=VisionRunningMode.VIDEO,
)

with HolisticLandmarker.create_from_options(options) as landmarker:
    res = landmarker.detect_for_video(...)

I also couldn't find the relevant documentation for the Holistic Landmarker (0.10.10) online, so this setup was derived by reading the documentation and other issues. I can't get MediaPipe Holistic to work with the GPU delegate either on or off. The output with GPU delegation is:

ERROR: Following operations are not supported by GPU delegate:
DEQUANTIZE: 
DEQUANTIZE: Operation is not supported.
108 operations will run on the GPU, and the remaining 183 operations will run on the CPU.
ERROR: TfLiteGpuDelegate Prepare: Batch size mismatch, expected 1 but got 16
ERROR: Node number 291 (TfLiteGpuDelegate) failed to prepare.
Segmentation fault

When it's off, so just CPU:

F0000 00:00:1709199284.128629    1181 packet.cc:138] Check failed: holder_ != nullptr The packet is empty.
*** Check failure stack trace: ***
    @     0x7fb3d8653d89  absl::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x7fb3d78e32a6  mediapipe::Packet::GetProtoMessageLite()
    @     0x7fb3d802b712  pybind11::cpp_function::initialize<>()::{lambda()#3}::_FUN()
    @     0x7fb3d792179d  pybind11::cpp_function::dispatcher()
    @     0x5583181be516  cfunction_call
Aborted

I didn't get these errors before when using previous modules from tasks.vision.

Thanks in advance!

DehTop commented 8 months ago

I am having the same exact issue. +1

mvazquezgts commented 8 months ago

Yes, I have the same issue. The strangest thing is that it happens on my local PC, but when I use a Google Colab instance I can use the new version of Holistic. Why? Maybe I did something wrong? But if it happens to more people, I don't think it's my fault...

ayushgdev commented 7 months ago

Hello

Thanks for reporting the issue. We have reproduced the problem at our end as well and are working to provide a resolution. While we are reproducing the issue on macOS Sonoma, @mvazquezgts and @DehTop, a humble request: would you please specify the OS you are seeing the issue on? Just a suspicion as to whether the issue occurs on Windows as well.

mvazquezgts commented 7 months ago

Hello @ayushgdev I'm using ubuntu right now

Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy

DehTop commented 7 months ago

[Quotes @mvazquezgts's reply above.]

Hello @ayushgdev, thanks for investigating this. My specs are exactly the same as @mvazquezgts's.

Thanks

yiusay commented 7 months ago

@ayushgdev I am facing the same issue on Windows 11.

mvazquezgts commented 7 months ago

I think this error depends on whether hands are visible in the image from which the keypoints are extracted. If hands are present, everything is correct; if they are not visible, the error occurs.

imsamimalik commented 7 months ago

@ayushgdev Any updates on this? It is not fixed even in today's update (v0.10.11).

victoriayuechen commented 7 months ago

@ayushgdev Same here, the issue persists. The only difference is that with or without GPU the output is the same.

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
F0000 00:00:1709911500.085149    5115 packet.cc:138] Check failed: holder_ != nullptr The packet is empty.
*** Check failure stack trace: ***
    @     0x7f5f7fc4e7a9  absl::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x7f5f7ed403d4  mediapipe::Packet::GetProtoMessageLite()
    @     0x7f5f7f5eabb2  pybind11::cpp_function::initialize<>()::{lambda()#3}::_FUN()
    @     0x7f5f7ed861ed  pybind11::cpp_function::dispatcher()
    @     0x5588b0662516  cfunction_call
Aborted

mvazquezgts commented 7 months ago

Yes, unfortunately the same issue persists. As long as both hands are present in the detection, the model seems to work fine, no? Well, let's hope they fix it or point us back to a previous working solution.

kuaashish commented 7 months ago

Hi @schmidt-sebastian,

It seems that the issue is specific to macOS and Ubuntu platforms, not affecting other platforms like Windows where GPU support is not available in Python. The Holistic Landmarker is encountering difficulties in detecting the GPU and is producing the reported errors.

Thank you!!

yiusay commented 7 months ago

@kuaashish As noted in my last reply, this problem also exists in Windows 11.

pajarft commented 6 months ago

I am facing the same issue on Jetson Orin with Python. There is no problem with other tasks like hand, pose, or face, even using the GPU. Any update here?

leonarperro commented 6 months ago

+1. I'm facing the same issue on Python + Windows 10 without GPU delegation enabled (since it's apparently still not supported on Windows).

pajarft commented 5 months ago

I'm facing the same issue on Jetson Orin with Python, even on version 0.10.14. Any update here?

Ninlawat-Puhu commented 4 months ago

I'm facing the same issue when using the GPU to run MediaPipe.

Could you please recommend how to solve it?

I0000 00:00:1718374127.115017  169037 gl_context_egl.cc:85] Successfully initialized EGL. Major : 1 Minor: 5
I0000 00:00:1718374127.234479  174452 gl_context.cc:357] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 555.42.02), renderer: NVIDIA GeForce RTX 4090/PCIe/SSE2
W0000 00:00:1718374127.416966  174439 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.448694  174450 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.467847  174451 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.467848  174437 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.467852  174444 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.491317  174422 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.508121  174442 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718374127.586946  174429 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.

Ninlawat-Puhu commented 4 months ago

@kuaashish Could you provide a solution for this issue? It's also really difficult to check the results with all these warnings that MediaPipe emits.
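Not a fix for the crash, but for the warning noise mentioned in this thread, a commonly suggested (build-dependent, so unverified) workaround is to raise the native log threshold before MediaPipe is imported. A sketch:

```python
import os

# Assumption: MediaPipe's native logging (glog/absl) and TFLite read these
# environment variables at import time, so they must be set before
# `import mediapipe`. Whether every message is silenced depends on the build.
os.environ["GLOG_minloglevel"] = "2"      # 0=INFO, 1=WARNING, 2=ERROR
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # hide TFLite INFO/WARNING output

# import mediapipe as mp   # must come only after the variables above are set
```

This only reduces console clutter; the empty-packet crash itself is unaffected.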

Terrywang2001 commented 3 months ago

[Quotes the original issue report in full, ending with these two questions:]

* Is this a bug inherent to the new implementation or am I missing something in the set-up? If so, could you please update the documentation or point to where I can find it, since I could not find it on the github pages?

* Also, I still see some bugs arising from the new Holistic Landmarker (as seen in other recent issues), what would be an estimate on when this module will be more complete?

Hi Victoria, I am having trouble getting the holistic_landmarker.task file; could you please tell me where you got it?

Utkarsh-shift commented 2 months ago

[Quotes @Ninlawat-Puhu's comment above.]

Same issue here. Can you tell me the solution?

victoriayuechen commented 2 months ago

@Terrywang2001 So sorry for the late reply!

I don't get a notification unless you tag me :). I checked the download link for the HandLandmarker and just changed the URL; that way, you can download the holistic landmarker.

victoriayuechen commented 2 months ago

This might not be the way they intended it but here is the link:

https://storage.googleapis.com/mediapipe-models/holistic_landmarker/holistic_landmarker/float16/latest/holistic_landmarker.task

@Terrywang2001
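For anyone scripting the download, here is a minimal sketch using only the standard library. The URL is the one shared above (which mirrors the storage layout of the other MediaPipe task models); the local destination path is an arbitrary choice:

```python
import os
from urllib.request import urlretrieve

# Model URL from the comment above.
MODEL_URL = (
    "https://storage.googleapis.com/mediapipe-models/holistic_landmarker/"
    "holistic_landmarker/float16/latest/holistic_landmarker.task"
)

def download_model(dest: str = "mp_models/holistic_landmarker.task") -> str:
    """Fetch the .task bundle to `dest` and return the path."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urlretrieve(MODEL_URL, dest)
    return dest
```

Pass the returned path as `model_asset_path` in `BaseOptions`.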

mosamman commented 2 months ago

I have the same issue on Mac M1 with GPU Delegate and IMAGE Running Mode

MediaPipe 0.10.14, Python 3.11.8

Check failed: holder_ != nullptr The packet is empty.
*** Check failure stack trace: ***
    @        0x10a3c0504  absl::log_internal::LogMessage::SendToLog()
    @        0x10a3bff4c  absl::log_internal::LogMessage::Flush()
    @        0x10a3c0830  absl::log_internal::LogMessageFatal::~LogMessageFatal()
    @        0x10a3c0858  absl::log_internal::LogMessageFatal::~LogMessageFatal()
    @        0x109e45238  mediapipe::Packet::GetProtoMessageLite()
    @        0x109d12a88  pybind11::cpp_function::initialize<>()::{lambda()#1}::__invoke()
    @        0x1093b82b8  pybind11::cpp_function::dispatcher()
    @        0x10351db18  cfunction_call
    @        0x1034d2e58  _PyObject_MakeTpCall
    @        0x1035b11c4  _PyEval_EvalFrameDefault
    @        0x1035b5848  _PyEval_Vector
    @        0x1034d3944  _PyVectorcall_Call
    @        0x10366ed10  partial_call
    @        0x1034d3c70  _PyObject_Call
    @        0x1035b2fd8  _PyEval_EvalFrameDefault
    @        0x1035b5848  _PyEval_Vector
    @        0x1035b2fd8  _PyEval_EvalFrameDefault
    @        0x1035b5848  _PyEval_Vector
    @        0x1034d6338  method_vectorcall
    @        0x1036692b0  thread_run
    @        0x103609b40  pythread_wrapper
    @        0x1964e2f94  _pthread_start
    @        0x1964ddd34  thread_start

returndeneb commented 2 months ago

Same issue on Ubuntu 22.04.4 LTS and Windows 11.

Ubuntu: MediaPipe 0.10.15; Windows: MediaPipe 0.10.14. Both in a Conda env with Python 3.12.4.


F0000 00:00:1724993878.655008    7810 packet.cc:148] Check failed: holder_ != nullptr The packet is empty.
*** Check failure stack trace: ***
    @     0x754dc2b03b49  absl::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x754dc1b91458  mediapipe::Packet::GetProtoMessageLite()
    @     0x754dc247c532  pybind11::cpp_function::initialize<>()::{lambda()#3}::_FUN()
    @     0x754dc1bd6198  pybind11::cpp_function::dispatcher()
    @           0x54a9f4  cfunction_call
Aborted (core dumped)
lilmothiit commented 1 month ago

The issue occurs at various calls to packet_getter.get_proto(output_packets[_XXX_STREAM_NAME]). This happens because the task outputs no data for objects it didn't detect (like, for example, hands).

I fixed this for myself by handling empty packets for all possible landmark lists in holistic_landmarker.py's _build_landmarker_result:

def _build_landmarker_result(
    output_packets: Mapping[str, packet_module.Packet]
) -> HolisticLandmarkerResult:
  """Constructs a `HolisticLandmarkerResult` from output packets."""
  holistic_landmarker_result = HolisticLandmarkerResult(
      [], [], [], [], [], [], []
  )

  if not output_packets[_FACE_LANDMARKS_STREAM_NAME].is_empty():
    face_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_FACE_LANDMARKS_STREAM_NAME]
    )
    face_landmarks = landmark_pb2.NormalizedLandmarkList()
    face_landmarks.MergeFrom(face_landmarks_proto_list)
    for face_landmark in face_landmarks.landmark:
      holistic_landmarker_result.face_landmarks.append(
        landmark_module.NormalizedLandmark.create_from_pb2(face_landmark)
      )

  if not output_packets[_POSE_LANDMARKS_STREAM_NAME].is_empty():
    pose_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_POSE_LANDMARKS_STREAM_NAME]
    )
    pose_landmarks = landmark_pb2.NormalizedLandmarkList()
    pose_landmarks.MergeFrom(pose_landmarks_proto_list)
    for pose_landmark in pose_landmarks.landmark:
      holistic_landmarker_result.pose_landmarks.append(
        landmark_module.NormalizedLandmark.create_from_pb2(pose_landmark)
      )

  if not output_packets[_POSE_WORLD_LANDMARKS_STREAM_NAME].is_empty():
    pose_world_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_POSE_WORLD_LANDMARKS_STREAM_NAME]
    )
    pose_world_landmarks = landmark_pb2.LandmarkList()
    pose_world_landmarks.MergeFrom(pose_world_landmarks_proto_list)
    for pose_world_landmark in pose_world_landmarks.landmark:
      holistic_landmarker_result.pose_world_landmarks.append(
        landmark_module.Landmark.create_from_pb2(pose_world_landmark)
      )

  if not output_packets[_LEFT_HAND_LANDMARKS_STREAM_NAME].is_empty():
    left_hand_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_LEFT_HAND_LANDMARKS_STREAM_NAME]
    )
    left_hand_landmarks = landmark_pb2.NormalizedLandmarkList()
    left_hand_landmarks.MergeFrom(left_hand_landmarks_proto_list)
    for hand_landmark in left_hand_landmarks.landmark:
      holistic_landmarker_result.left_hand_landmarks.append(
        landmark_module.NormalizedLandmark.create_from_pb2(hand_landmark)
      )

  if not output_packets[_LEFT_HAND_WORLD_LANDMARKS_STREAM_NAME].is_empty():
    left_hand_world_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_LEFT_HAND_WORLD_LANDMARKS_STREAM_NAME]
    )
    left_hand_world_landmarks = landmark_pb2.LandmarkList()
    left_hand_world_landmarks.MergeFrom(left_hand_world_landmarks_proto_list)
    for left_hand_world_landmark in left_hand_world_landmarks.landmark:
      holistic_landmarker_result.left_hand_world_landmarks.append(
        landmark_module.Landmark.create_from_pb2(left_hand_world_landmark)
      )

  if not output_packets[_RIGHT_HAND_LANDMARKS_STREAM_NAME].is_empty():
    right_hand_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_RIGHT_HAND_LANDMARKS_STREAM_NAME]
    )
    right_hand_landmarks = landmark_pb2.NormalizedLandmarkList()
    right_hand_landmarks.MergeFrom(right_hand_landmarks_proto_list)
    for hand_landmark in right_hand_landmarks.landmark:
      holistic_landmarker_result.right_hand_landmarks.append(
        landmark_module.NormalizedLandmark.create_from_pb2(hand_landmark)
      )

  if not output_packets[_RIGHT_HAND_WORLD_LANDMARKS_STREAM_NAME].is_empty():
    right_hand_world_landmarks_proto_list = packet_getter.get_proto(
      output_packets[_RIGHT_HAND_WORLD_LANDMARKS_STREAM_NAME]
    )
    right_hand_world_landmarks = landmark_pb2.LandmarkList()
    right_hand_world_landmarks.MergeFrom(right_hand_world_landmarks_proto_list)
    for right_hand_world_landmark in right_hand_world_landmarks.landmark:
      holistic_landmarker_result.right_hand_world_landmarks.append(
        landmark_module.Landmark.create_from_pb2(right_hand_world_landmark)
      )

  if _FACE_BLENDSHAPES_STREAM_NAME in output_packets:
    face_blendshapes_proto_list = packet_getter.get_proto(
      output_packets[_FACE_BLENDSHAPES_STREAM_NAME]
    )
    face_blendshapes_classifications = classification_pb2.ClassificationList()
    face_blendshapes_classifications.MergeFrom(face_blendshapes_proto_list)
    holistic_landmarker_result.face_blendshapes = []
    for face_blendshapes in face_blendshapes_classifications.classification:
      holistic_landmarker_result.face_blendshapes.append(
        category_module.Category(
          index=face_blendshapes.index,
          score=face_blendshapes.score,
          display_name=face_blendshapes.display_name,
          category_name=face_blendshapes.label,
        )
      )

  if _POSE_SEGMENTATION_MASK_STREAM_NAME in output_packets:
    holistic_landmarker_result.segmentation_mask = packet_getter.get_image(
      output_packets[_POSE_SEGMENTATION_MASK_STREAM_NAME]
    )

  return holistic_landmarker_result

With this fix applied, when the task didn't detect some object, the HolisticLandmarkerResult simply contains an empty list for that object.
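The essence of the fix is the is_empty() guard: leave the result's list empty instead of calling packet_getter.get_proto on an empty packet. The pattern can be illustrated without any MediaPipe dependency, using a hypothetical stand-in Packet class:

```python
class FakePacket:
    """Hypothetical stand-in for mediapipe's Packet, for illustration only."""

    def __init__(self, payload=None):
        self._payload = payload

    def is_empty(self):
        return self._payload is None

    def get(self):
        if self._payload is None:
            # Mirrors the fatal "The packet is empty." check upstream.
            raise RuntimeError("The packet is empty.")
        return self._payload


def build_result(output_packets):
    # Start with empty lists, mirroring HolisticLandmarkerResult([], [], ...).
    result = {"pose_landmarks": [], "left_hand_landmarks": [],
              "right_hand_landmarks": []}
    for name in result:
        packet = output_packets.get(name, FakePacket())
        if not packet.is_empty():  # the guard the fix above adds per stream
            result[name] = list(packet.get())
    return result


# A frame where only the pose was detected: the hand entries stay as empty
# lists instead of triggering the "packet is empty" fatal check.
res = build_result({"pose_landmarks": FakePacket([0.1, 0.2]),
                    "left_hand_landmarks": FakePacket()})
```

The same guard would need to be applied to every optional output stream, as the fix above does for face, pose, and both hands.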