google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0
26.95k stars 5.1k forks

Mediapipe CodePens don't run on iOS Safari #1427

Closed ionif closed 3 years ago

ionif commented 3 years ago

Hello all,

I have a project using MediaPipe Hands on iOS, and I've been trying to migrate from the tfjs model to the new MediaPipe API, but even with WebGL2 enabled it still fails to work. I've made sure I'm requesting camera permission properly via navigator.mediaDevices.getUserMedia. Wondering if anyone has ideas on what's going wrong.
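For reference, this is roughly how camera permission is requested with the standard `navigator.mediaDevices.getUserMedia` API (a sketch only; the `buildConstraints` helper and the 640x480 values are illustrative, not from the CodePen):

```javascript
// Hypothetical helper: builds a MediaStreamConstraints object for
// getUserMedia. Shown only to illustrate the constraint shape.
function buildConstraints(width, height) {
  return {
    audio: false,
    video: {width: {ideal: width}, height: {ideal: height}},
  };
}

async function startCamera(videoElement, width, height) {
  // Prompts the user for camera permission and returns a MediaStream.
  const stream = await navigator.mediaDevices.getUserMedia(
      buildConstraints(width, height));
  videoElement.srcObject = stream;
  // iOS Safari needs the playsinline attribute for inline playback.
  videoElement.setAttribute('playsinline', 'true');
  await videoElement.play();
}

// Only attempt camera access in a browser context.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  startCamera(document.querySelector('video'), 640, 480);
}
```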

Here's the codepen that I'm testing: https://codepen.io/aionkov/pen/MWjEqWa

Here's the console:

```
[Warning] I1223 11:05:16.032000 1 gl_context_webgl.cc:146] Successfully created a WebGL context with major version 3 and handle 3 (hands_solution_wasm_bin.js, line 9)
[Warning] I1223 11:05:16.034000 1 gl_context.cc:340] GL version: 3.0 (OpenGL ES 3.0 (WebGL 2.0)) (hands_solution_wasm_bin.js, line 9)
[Warning] W1223 11:05:16.034000 1 gl_context.cc:794] Drishti OpenGL error checking is disabled (hands_solution_wasm_bin.js, line 9)
[Warning] E1223 11:05:16.711000 1 calculator_graph.cc:775] INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
[Warning] Calculator::Open() for node "handlandmarktrackinggpuhandlandmarkgpuInferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node \"handlandmarktrackinggpuhandlandmarkgpuInferenceCalculator\" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50'] (hands_solution_wasm_bin.js, line 9)
[Warning] F1223 11:05:16.712000 1 solutionswasm.embind.cc:585] Check failed: ::util::OkStatus() == (graph->WaitUntilIdle()) (OK vs. INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
[Warning] Calculator::Open() for node "handlandmarktrackinggpuhandlandmarkgpuInferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node \"handlandmarktrackinggpuhandlandmarkgpuInferenceCalculator\" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50']) (hands_solution_wasm_bin.js, line 9)
[Warning] Check failure stack trace: (hands_solution_wasm_bin.js, line 9)
[Warning] undefined (hands_solution_wasm_bin.js, line 9)
[Error] Unhandled Promise Rejection: RuntimeError: abort(undefined) at
  jsStackTrace@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands_solution_wasm_bin.js:9:67558
  stackTrace@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands_solution_wasm_bin.js:9:67737
  abort@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands_solution_wasm_bin.js:9:41049
  _abort@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands_solution_wasm_bin.js:9:179948
  wasm-stub@[wasm code]
  <?>.wasm-function[10471]@[wasm code]
  <?>.wasm-function[10466]@[wasm code]
  <?>.wasm-function[10461]@[wasm code]
  <?>.wasm-function[10458]@[wasm code]
  <?>.wasm-function[10474]@[wasm code]
  <?>.wasm-function[515]@[wasm code]
  <?>.wasm-function[502]@[wasm code]
  wasm-stub@[wasm code]
  [native code]
  SolutionWasm$send https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands.js:33:352
  Q@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands.js:10:295
  https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands.js:11:90
  k@https://cdn.jsdelivr.net/npm/@mediapipe/hands@0.1/hands.js:22:322
  promiseReactionJob@[native code]
  (evaluating 'new WebAssembly.RuntimeError(what)')
(anonymous function) (hands_solution_wasm_bin.js:9:41099)
promiseReactionJob
```

tyrmullen commented 3 years ago

Looked into the selfie-segmentation issue a bit, and found what I believe to be some texture memory handling issues. I don't have the bandwidth to fully fix the underlying problems at the moment, but I created a patch to at least allow selfie-segmentation to run on iOS (tested successfully on my old iPhone6S). We'll try to get this integrated into the JS Solutions codebase as soon as possible.

sanamumtaz commented 3 years ago

> Looked into the selfie-segmentation issue a bit, and found what I believe to be some texture memory handling issues. I don't have the bandwidth to fully fix the underlying problems at the moment, but I created a patch to at least allow selfie-segmentation to run on iOS (tested successfully on my old iPhone6S). We'll try to get this integrated into the JS Solutions codebase as soon as possible.

Kindly update about the current status.

tyrmullen commented 3 years ago

The team is still discussing how best to conditionally apply the patch. Will update again when I have something more substantive.

tyrmullen commented 3 years ago

The team desired a specific form for the patch; I have now refactored everything accordingly and submitted the patch. The next update to the selfie-segmentation JS Solution will contain the fix. All other iOS Solutions will also have slightly altered behavior with their next updates, but the impact there will hopefully be minimal/negligible.

Also, it should be noted this issue was NOT iOS-specific, but rather occurred due to using the selfie-segmentation module with CPU ML inference. So in case it helps other people running into similar odd selfie-segmentation issues, the workaround patches were all just different ways to force the TensorsToSegmentationCalculator to use the "use_gpu=true" path despite ML being performed on the CPU (and thus having mediapipe::Tensor objects with only CPU backing by default).

tyrmullen commented 3 years ago

On an unrelated note, @jadams777: do you mind trimming the console output from your last message a bit (or replacing it with a link)? The length of the resulting post makes it difficult to scroll through the previous responses for full thread context.

tyrmullen commented 3 years ago

Our GPU ML does not run on iOS, so performance will be slower there, but with the next update all MediaPipe CodePens should at least run on iOS Safari. Closing this bug out as fixed.
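Since the failures in this thread surfaced through the WebGL path, applications may want to feature-check WebGL2 before initializing a Solution and fall back otherwise. A minimal sketch (the fallback message is illustrative; this is not part of MediaPipe's API):

```javascript
// Returns true if the given canvas can create a WebGL2 context.
// Takes the canvas as a parameter so the check stays a pure predicate.
function supportsWebGL2(canvas) {
  try {
    return canvas.getContext('webgl2') !== null;
  } catch (e) {
    return false;
  }
}

// Browser-only usage: decide whether to initialize the GPU path.
if (typeof document !== 'undefined') {
  const ok = supportsWebGL2(document.createElement('canvas'));
  console.log(ok ? 'WebGL2 available' : 'WebGL2 unavailable; use a fallback');
}
```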

cbasavaraj commented 2 years ago

Hello, I have a question about Camera vs SourcePicker. At the moment, SourcePicker isn't working on iPhone 12, but Camera is, after adding playsinline="true" crossorigin="anonymous". However, Camera doesn't allow changing the size of the CanvasElement on the fly, for example when the phone is rotated 90 degrees. Is there any way to get the size field while using Camera?

Source Picker:

```js
new controls.SourcePicker({
  onFrame: async (input, size) => {
    resolution = getResolution(size);
    canvasElement.width = resolution.w;
    canvasElement.height = resolution.h;
    await faceMesh.send({image: input});
  },
}),
```

Camera:

```js
const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceMesh.send({image: videoElement});
  },
  width: resolution.w,
  height: resolution.h
});
```

getResolution:

```js
function getResolution(size) {
  const aspect = size.width / size.height;
  let width, height;
  if (window.innerHeight < window.innerWidth) {
    height = window.innerHeight;
    width = height * aspect;
  } else {
    width = window.innerWidth;
    height = width / aspect;
  }
  return {w: width, h: height};
}
```
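One possible workaround for the question above (a sketch, not an official MediaPipe answer): inside Camera's onFrame callback the frame dimensions can be read from the video element itself via the standard `videoWidth`/`videoHeight` properties, replacing the `size` argument that SourcePicker provides. The `fitToViewport` helper below is a hypothetical, viewport-parameterized variant of getResolution:

```javascript
// Pure variant of getResolution: the viewport is passed in explicitly
// instead of read from window, so the sizing logic is testable.
function fitToViewport(videoWidth, videoHeight, viewportWidth, viewportHeight) {
  const aspect = videoWidth / videoHeight;
  if (viewportHeight < viewportWidth) {
    const h = viewportHeight;
    return {w: h * aspect, h: h};  // landscape: fill height
  }
  const w = viewportWidth;
  return {w: w, h: w / aspect};    // portrait: fill width
}

// Browser-only usage, assuming videoElement/canvasElement/faceMesh exist
// as in the snippets above. Resizes the canvas on every frame, so it
// adapts when the phone is rotated.
if (typeof window !== 'undefined') {
  const camera = new Camera(videoElement, {
    onFrame: async () => {
      const r = fitToViewport(videoElement.videoWidth, videoElement.videoHeight,
                              window.innerWidth, window.innerHeight);
      canvasElement.width = r.w;
      canvasElement.height = r.h;
      await faceMesh.send({image: videoElement});
    },
  });
  camera.start();
}
```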
AHMetaCubes commented 2 years ago

Any status update on mobile browser support?

hktalent commented 2 years ago

Same issue here: https://51pwn.com/3dDetect.html
