ionif closed this issue 3 years ago.
Looked into the selfie-segmentation issue a bit, and found what I believe to be some texture memory handling issues. I don't have the bandwidth to fully fix the underlying problems at the moment, but I created a patch to at least allow selfie-segmentation to run on iOS (tested successfully on my old iPhone 6S). We'll try to get this integrated into the JS Solutions codebase as soon as possible.
Kindly provide an update on the current status.
The team is still discussing how best to conditionally apply the patch. I'll update again when I have something more substantive.
The team wanted the patch in a specific form, so I have now refactored everything accordingly and submitted it. The next update to the selfie-segmentation JS Solution will contain the fix. All other iOS Solutions will also have slightly altered behavior with their next updates, but the impact there will hopefully be minimal/negligible.
Also, it should be noted this issue was NOT iOS-specific, but rather occurred due to using the selfie-segmentation module with CPU ML inference. So in case it helps other people running into similar odd selfie-segmentation issues, the workaround patches were all just different ways to force the TensorsToSegmentationCalculator to use the "use_gpu=true" path despite ML being performed on the CPU (and thus having mediapipe::Tensor objects with only CPU backing by default).
On an unrelated note, @jadams777: do you mind trimming the console output from your last message a bit (or replacing it with a link)? The length of the resulting post makes it difficult to scroll through the previous responses for full thread context.
Note that our GPU ML does not run on iOS, so performance will be slower there, but with the next update all MediaPipe CodePens should at least run in iOS Safari. Closing this bug out as fixed.
Hello, I have a question about Camera vs. SourcePicker. At the moment, SourcePicker isn't working on iPhone 12, but Camera is, after adding playsinline="true" crossorigin="anonymous". However, Camera doesn't allow changing the size of the canvas element on the fly, for example when the phone is rotated 90 degrees. Is there any way to get the size field while using Camera?
Source Picker:
new controls.SourcePicker({
  onFrame: async (input, size) => {
    resolution = getResolution(size);
    canvasElement.width = resolution.w;
    canvasElement.height = resolution.h;
    await faceMesh.send({image: input});
  },
}),
Camera:
const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceMesh.send({image: videoElement});
  },
  width: resolution.w,
  height: resolution.h
});
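One possible workaround, sketched here without having verified the Camera utility's internals: since Camera's onFrame callback receives no size argument, read the intrinsic frame dimensions off the underlying video element instead. The helper name sizeFromVideo is hypothetical, not part of the MediaPipe API.

```javascript
// Hypothetical helper: reads the current intrinsic dimensions of a <video>
// element. videoWidth/videoHeight are 0 until the stream's metadata loads,
// so callers should check for that before resizing the canvas.
function sizeFromVideo(video) {
  return {width: video.videoWidth, height: video.videoHeight};
}

// Sketch of use inside Camera's onFrame (browser-only, untested on device):
// const camera = new Camera(videoElement, {
//   onFrame: async () => {
//     const size = sizeFromVideo(videoElement);
//     if (size.width && size.height) {
//       const resolution = getResolution(size);
//       canvasElement.width = resolution.w;
//       canvasElement.height = resolution.h;
//     }
//     await faceMesh.send({image: videoElement});
//   },
// });
```

Because the dimensions are re-read on every frame, this should also pick up a new aspect ratio after a 90-degree rotation, assuming the browser updates the stream's dimensions on orientation change.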
getResolution:
function getResolution(size) {
  let aspect = size.width / size.height;
  let width = null, height = null;
  if (window.innerHeight < window.innerWidth) {
    height = window.innerHeight;
    width = height * aspect;
  } else {
    width = window.innerWidth;
    height = width / aspect;
  }
  return {
    w: width,
    h: height
  };
}
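For what it's worth, the letterboxing math in getResolution can be factored into a pure function that doesn't touch window, which makes it easy to unit-test outside the browser. The name fitToViewport and the explicit viewport parameters are my own; this is just a sketch of the same logic.

```javascript
// Hypothetical pure equivalent of getResolution above: scales a source size
// to a viewport while preserving the source aspect ratio.
function fitToViewport(size, viewportWidth, viewportHeight) {
  const aspect = size.width / size.height;
  if (viewportHeight < viewportWidth) {
    // Landscape viewport: fill the height, derive the width.
    return {w: viewportHeight * aspect, h: viewportHeight};
  }
  // Portrait viewport: fill the width, derive the height.
  return {w: viewportWidth, h: viewportWidth / aspect};
}
```

The original getResolution is then equivalent to fitToViewport(size, window.innerWidth, window.innerHeight).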
Any status update on getting this working on mobile browsers?
Hello all,
I have a project using MediaPipe Hands on iOS, and I've been trying to update from the tfjs model to the new MediaPipe API, but even when I enable WebGL2 it still fails to work. I've made sure I'm requesting camera permission properly via navigator.mediaDevices.getUserMedia. Wondering if anyone has any ideas on what's going wrong.
Here's the codepen that I'm testing: https://codepen.io/aionkov/pen/MWjEqWa
Here's the console: