delebash closed this issue 3 years ago
Is there any way to use the MediaPipe models on the web, like how it happens in the visualizer?
I was able to extract the web application manually, but I do not know anything about the API or how to get the data out.
Hi, I have already extracted it like this, but it's too difficult to get anything out of it. I feel it is not the right way.
Yes, I agree, but there is no documentation on how to do it.
Yes, since it works fine independently, there should be a proper way to do this: if not a final version, at least a beta version to understand how it works.
We will likely release the MediaPipe Solutions API for Python and JavaScript in Oct/Nov, enabling JavaScript developers to use MP Hands or MP Face Mesh in their web apps in just a few lines.
Thanks!
I would like to use this on a react application as well.
I have the same question. Is there any way to run MediaPipe in my own web app today?
@mgyong since this is a recurrent request/issue would it be possible to keep one issue open (either this one or a new one) to track progress?
> We will likely release the MediaPipe Solutions API for Python and JavaScript in Oct/Nov, enabling JavaScript developers to use MP Hands or MP Face Mesh in their web apps in just a few lines.
Are these two APIs @tfjs-models/handpose and @tfjs-models-facemesh?
Let me know if I understand correctly: `process` is used to process a frame on the CPU, and `processGL` to process a frame on the GPU. That means computational functions such as model loading and inference are executed with WebAssembly, which allows us to load .tflite models. I noticed performance and quality differences between these two demos: tfjs-demo and mediapipe-demo. The tfjs demo is less precise and runs at a lower FPS than the MediaPipe one. Where does this come from?
Same issue here. I managed to make it work locally with breathtaking performance, but I'm unable to do anything with it! Now that you have given us hope, please give us the API!
@mgyong Could you kindly give an update on when to reasonably expect an API release? Thanks!
MP: is there a timeline for it?
We will be releasing the MP Solutions JavaScript API during the week of 9 Dec.
Hi @mgyong,
Great job on the @mediapipe/hands package :fireworks: This is what we needed!
Just a few questions: what about the `visibility` field? Does it indicate whether the joint is occluded or not? It is currently undefined.
Thanks!
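For anyone landing here, a minimal sketch of consuming the `@mediapipe/hands` results callback. The `toPixelCoords` helper below is my own illustration (not part of the package), and I'm assuming the documented results shape where each landmark has normalized `x`, `y`, `z` and an optional `visibility` field:

```javascript
// Hypothetical helper: convert a normalized landmark (x, y in [0, 1])
// to pixel coordinates for a canvas of the given size.
function toPixelCoords(landmark, width, height) {
  return { x: landmark.x * width, y: landmark.y * height };
}

// Browser-side wiring (sketch, assuming the @mediapipe/hands package):
//
//   const hands = new Hands({
//     locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
//   });
//   hands.onResults((results) => {
//     for (const landmarks of results.multiHandLandmarks || []) {
//       const wrist = toPixelCoords(landmarks[0], canvas.width, canvas.height);
//       // landmarks[0].visibility may be undefined, as noted above.
//     }
//   });

// Example with a mock landmark:
const wrist = toPixelCoords({ x: 0.5, y: 0.25, z: -0.05 }, 640, 480);
console.log(wrist.x, wrist.y); // 320 120
```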
Hi @mgyong,
Thanks for making the JS solutions so easily accessible. I was wondering what the timeline is for being able to run whole custom pipelines in JS. We have models that build on the pose models, and we currently run them on desktop with some custom calculators and a modified graph. Are there any plans to allow this in JS as well? The current workaround is to run these models separately in TF.js, which requires extra work to make this accessible on the web.
There was a previous workaround that ran these models in TF.js, with the algorithms written in JavaScript: @tfjs-models. There are as many native JS APIs as there are algorithm types (hand, face, ...). MediaPipe has recently released a similar API that runs MediaPipe algorithms in WebAssembly. Unfortunately, we don't know how to build the wasm from the source code, so we can only benefit from what MediaPipe has already ported, and the disadvantage is that we cannot fully customize the algorithms. I hope this will change in the near future.
In the JavaScript version I cannot find z coordinate values; it gives only x and y. Am I missing something?
> In the JavaScript version I cannot find z coordinate values; it gives only x and y. Am I missing something?
If you are looking for Z, it means you want to solve point coordinates in 3D space. I might be wrong, but I think the only thing you'll find are 2D coordinates in your 2D image space. The models here analyze a 2D image and "map" a 3D model onto it. If you use the TensorFlow model directly, you might be able to extract the 3D coordinates of the model, but in its own space. Real 3D coordinates would require processing much more unknown data (camera FOV, lens distortion, model dimensions, etc.). Have a look at https://github.com/spite/FaceMeshFaceGeometry. In the facemesh example you'll notice that the red ball doesn't move according to z; in fact, the face mesh only grows and shrinks.
@gonzalle If what you say is a problem, then why is MediaPipe advertising the prediction of 3D coordinates? See this doc on Face Mesh: https://google.github.io/mediapipe/solutions/face_mesh#models
@delebash: this is only about the normalized 3D space of the mask model. For instance, the Z of the tip of the nose is relative to the axis center of the mask. This is the conclusion I came to after exploring the API.
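To make this concrete, a small sketch (my own illustration, assuming the documented convention that z is a relative depth with respect to a reference point on the detected model, with smaller values closer to the camera):

```javascript
// z in MediaPipe landmarks is a relative depth: it is expressed with respect
// to a reference point on the detected model (e.g. the wrist for hands, the
// center of the face mesh for Face Mesh), not in real-world units.
// Smaller z means closer to the camera.
function isCloserToCamera(a, b) {
  return a.z < b.z;
}

// Mock landmarks: only the *difference* in z between landmarks of the same
// detection is meaningful, never the absolute value.
const noseTip = { x: 0.5, y: 0.5, z: -0.08 };
const cheek = { x: 0.6, y: 0.55, z: -0.02 };
console.log(isCloserToCamera(noseTip, cheek)); // true
```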
@gonzalle Thanks for the info.
@mgyong could you kindly provide information on when we can expect to be able to run whole custom pipelines on the web? We have a solution that uses a custom tflite model and a custom pipeline, and we have successfully used MediaPipe to build a desktop app and a mobile app, but the most important part for us (the web app) is missing. If you can just tell us whether to expect it within weeks or months or "definitely not soon", it will greatly help our planning. Many thanks in advance!
Hi,
The source code provided for MediaPipe Hands in JavaScript is not working on mobile devices (tested on various devices). Even the provided demo is not working. Is there any workaround?
@rahulscp I experienced the same on Android Chrome, but everything works in Firefox. You could give that a try for now.
But that is not a solution, right?
> @rahulscp I experienced the same on Android Chrome, but everything works in Firefox. You could give that a try for now.
It is not working with Firefox on iOS mobile devices.
We do not currently test on Firefox. We have been rolling out improvements to iOS support.
I've only recently been assigned to this issue... the demos on CodePen are meant to demonstrate how to use the API.
Documentation starts at: https://google.github.io/mediapipe/solutions/solutions.html
You can also get to documentation by clicking on the "Click here for more info" link presented with each demo. The shortcut will take you right to the correct solution in the docs.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
Hi, I want to use iris landmark detection in my web app, and at the current time there is no JS API for it. Is there any other way, like running the C++ version on WebGL or something?
I don't need the visualizer; I just want to be able to run MediaPipe Hands with multi-hand support in my web app. From my understanding, the code is compiled into wasm and then run from a web app. How would I include the Hands with multi-hand support app in my own web application?
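For what it's worth, a hedged sketch of what multi-hand usage looks like with the `@mediapipe/hands` package: the `countDetectedHands` helper below is my own illustration, assuming the documented results shape where `multiHandLandmarks` holds one 21-landmark array per detected hand.

```javascript
// Hypothetical helper: count hands in a results object shaped like the one
// @mediapipe/hands passes to onResults (multiHandLandmarks is an array of
// 21-landmark arrays, one entry per detected hand).
function countDetectedHands(results) {
  return (results.multiHandLandmarks || []).length;
}

// Browser-side sketch (assumes the @mediapipe/hands package served from a CDN):
//
//   const hands = new Hands({
//     locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
//   });
//   hands.setOptions({ maxNumHands: 2 });  // enable multi-hand tracking
//   hands.onResults((results) => {
//     console.log(`${countDetectedHands(results)} hand(s) detected`);
//   });
//   // Feed frames to the graph, e.g. hands.send({ image: videoElement });

// Mock results with two hands:
const mock = { multiHandLandmarks: [[/* 21 landmarks */], [/* 21 landmarks */]] };
console.log(countDetectedHands(mock)); // 2
```

The wasm build and model files are fetched by the package itself via `locateFile`, so there is no need to compile anything from the MediaPipe sources.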