Open soonhyo opened 8 months ago
Hi @soonhyo ,
It looks like you are trying to modify our Task API code to use the Selfie Multiclass tflite model. We haven't tested this configuration yet, so we are unable to offer support in this case. If you are following our documentation from here, please let us know the specific guide or instructions you are using so we can better understand and address the issue.
Thank you
Thank you for your reply.
I referred to the live-stream example for Python in the MediaPipe Image Segmentation documentation. Based on the discussion in #4984, I added delegate=python.BaseOptions.Delegate.GPU to the code. No other significant modifications were made to the example code.
It's worth noting that the same code worked with GPU support for the hair_segmenter.tflite model. Additionally, when I converted the selfie_multiclass_256x256.tflite model to an ONNX model, it operated successfully with GPU support.
Are you able to use the model in MediaPipe Studio? https://mediapipe-studio.webapps.google.com/studio/demo/image_segmenter
You can select the same model here (multi-class segmentation) and GPU as the delegate.
Thank you for your reply, @schmidt-sebastian. Yes, it worked normally in MediaPipe Studio, but it has not worked in the Python code yet.
Same on an Android 11T (21081111RG): it works well with the CPU delegate but hits a "Batch size mismatch" error when modified to use GPU. Using the MediaPipe Studio demo on this device and selecting Multi-class Selfie Segmenter 256 with GPU, it seems to produce the same error, so I think the problem is the GPU path for this model on this device. Testing other models (Selfie Segmenter, Hair Segmenter, and Deeplab V3) with the GPU delegate still works as expected.
Log Details:
E Image segmenter failed to load model with error: invalid argument: CalculatorGraph::Run() failed in Run:
Calculator::Open() for node "mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: Batch size mismatch, expected 1 but got 8
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
Yes
OS Platform and Distribution
Linux Ubuntu 20.04
MediaPipe Tasks SDK version
0.10.9
Task name (e.g. Image classification, Gesture recognition etc.)
Selfie multiclass segmentation
Programming Language and version (e.g. C++, Python, Java)
Python
Describe the actual behavior
I am encountering a "Batch size mismatch" error when attempting to use the Selfie Multiclass model with the GPU delegate in MediaPipe. The error message is as follows:

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1705492784.934668 18558 gl_context_egl.cc:85] Successfully initialized EGL. Major : 1 Minor: 5
I0000 00:00:1705492785.005127 18620 gl_context.cc:344] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 525.147.05), renderer: NVIDIA RTX A2000 Laptop GPU/PCIe/SSE2
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: TfLiteGpuDelegate Prepare: Batch size mismatch, expected 1 but got 8
ERROR: Node number 175 (TfLiteGpuDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
E0000 00:00:1705492785.049712 18558 calculator_graph.cc:876] INTERNAL: CalculatorGraph::Run() failed: Calculator::Open() for node "mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: ; RET_CHECK failure (mediapipe/calculators/tensor/inference_calculator_gl.cc:194) (interpreter->ModifyGraphWithDelegate(delegate_.get()))==(kTfLiteOk)

Traceback (most recent call last):
  File "mp_head.py", line 156, in <module>
    ros_app = RosApp()
  File "mp_head.py", line 106, in __init__
    super(RosApp, self).__init__()
  File "mp_head.py", line 60, in __init__
    self.segmenter = ImageSegmenter.create_from_options(self.options)
  File "/home/s-kim/.local/lib/python3.8/site-packages/mediapipe/tasks/python/vision/image_segmenter.py", line 268, in create_from_options
    return cls(
  File "/home/s-kim/.local/lib/python3.8/site-packages/mediapipe/tasks/python/vision/image_segmenter.py", line 145, in __init__
    super(ImageSegmenter, self).__init__(
  File "/home/s-kim/.local/lib/python3.8/site-packages/mediapipe/tasks/python/vision/core/base_vision_task_api.py", line 70, in __init__
    self._runner = _TaskRunner.create(graph_config, packet_callback)
RuntimeError: CalculatorGraph::Run() failed: Calculator::Open() for node "mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_image_segmenter_imagesegmentergraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: ; RET_CHECK failure (mediapipe/calculators/tensor/inference_calculator_gl.cc:194) (interpreter->ModifyGraphWithDelegate(delegate_.get()))==(kTfLiteOk)
Describe the expected behaviour
I expected that when running the Selfie Multiclass model with the GPU delegate, the batch size would match the expected value and no error would occur.
Standalone code/steps you may have used to try to get what you need
Other info / Complete Logs
No response