Open stephanschulz opened 3 weeks ago
Hi @stephanschulz. Yes, I believe the simplest way is to use the library with Python, or, with Node.js/JS, to load the task or tflite files locally instead of from the CDN.
In the models dir you will see hand_landmarker.task; this file can be used to run hand pose detection offline.
Instead of loading via CDN like I did, point to the local task file. You also need to download the WebAssembly (WASM) files into a dir inside the project.
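One way to get those WASM files locally (a sketch, assuming you install `@mediapipe/tasks-vision` via npm — the package ships its WASM assets in a `wasm/` subfolder) is to copy them into a folder your app serves:

```shell
# Copy the WASM assets from the installed package into a locally served folder.
# Assumes @mediapipe/tasks-vision has been installed with npm in this project.
mkdir -p public/wasm
cp node_modules/@mediapipe/tasks-vision/wasm/* public/wasm/ 2>/dev/null || \
  echo "Install the package first: npm install @mediapipe/tasks-vision"
```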
An example of just the hand task initialization would be something like:
```js
// Import from the locally installed @mediapipe/tasks-vision package
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

let gestureRecognizer;

async function createGestureRecognizer() {
  const filesetResolver = await FilesetResolver.forVisionTasks(
    "/public/wasm" // Use the local path to your WASM files
  );
  gestureRecognizer = await GestureRecognizer.createFromOptions(filesetResolver, {
    baseOptions: {
      modelAssetPath: "/models/hand_landmarker.task", // Use the local path to your task file
      delegate: "GPU" // or "CPU"
    },
    runningMode: "VIDEO",
    numHands: 2
  });
}
```
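Since the recognizer above runs in `"VIDEO"` mode, you then call it once per video frame. A minimal sketch (the `detectLoop` helper and its callback are my own names, not from this repo; `recognizeForVideo` is the Tasks Vision API method for VIDEO mode):

```javascript
// Hedged sketch: drive the recognizer from the browser's animation loop.
// `video` is an HTMLVideoElement, `gestureRecognizer` is the instance
// created above, and `onResults` receives each frame's result object.
function detectLoop(video, gestureRecognizer, onResults) {
  let lastVideoTime = -1;
  function frame() {
    // Only run detection when the video has advanced to a new frame.
    if (video.currentTime !== lastVideoTime) {
      lastVideoTime = video.currentTime;
      const results = gestureRecognizer.recognizeForVideo(video, performance.now());
      onResults(results);
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```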
For more info check out the Google documentation.
Thank you for sharing this project.
I was hoping to find a MediaPipe (mainly hand pose) implementation that works completely offline. Seeing that you use a local server to handle a lot of the MediaPipe work made me first think yours could run offline, but I noticed you are using a CDN here: https://github.com/heyfoz/nodejs-mediapipe/blob/08d1b5127aa9eb8a5fd425853c8a16ff24cbed59/public/js/full_detection.js#L23
Do you have any advice on the simplest example for doing everything with local files?