justadudewhohacks / face-api.js

JavaScript API for face detection and face recognition in the browser and nodejs with tensorflow.js
MIT License

error api face in Motorola E5 #776

Open deylyn opened 3 years ago

deylyn commented 3 years ago

[screenshot attached: MicrosoftTeams-image (1)]

vladmandic commented 3 years ago

Insufficient GPU memory to compile GL shaders.

In general, if you don't have a dedicated GPU, using the WebGL backend is not recommended in any machine learning project - use the WASM backend instead.

deylyn commented 3 years ago

how can I use the WASM backend in this case?

vladmandic commented 3 years ago

after tfjs has been loaded, but before faceapi has been used:

await tf.setWasmPaths('./path-to-wasm-files/');
await tf.setBackend('wasm');

and you do need to provide the *.wasm files (they are part of the tfjs package itself, typically in node_modules/@tensorflow/tfjs-backend-wasm/dist) or download them from any CDN

however, in this case the required version is quite old, since faceapi uses tfjs 1.7.0 and the WASM backend version must match

or use a newer port of faceapi: https://github.com/vladmandic/face-api, which is compatible with tfjs 2.x and 3.x

also, it's recommended to enable SIMD, as performance is higher by an order of magnitude - for example, go to chrome://flags and enable WebAssembly SIMD support
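
a minimal sketch of that setup for the newer tfjs 2.x/3.x route (the imports are the standard tfjs packages; the wasm path below is a placeholder, and the files it points to must match your installed @tensorflow/tfjs-backend-wasm version):

import * as tf from "@tensorflow/tfjs";
// importing the wasm backend package registers the 'wasm' backend as a side effect
import { setWasmPaths } from "@tensorflow/tfjs-backend-wasm";

// placeholder: serve the *.wasm files yourself (copied from
// node_modules/@tensorflow/tfjs-backend-wasm/dist) or point at a CDN,
// keeping the version in sync with the installed tfjs-backend-wasm package
setWasmPaths("/path-to-wasm-files/");

await tf.setBackend("wasm"); // switch away from the default WebGL backend
await tf.ready();            // wait for the backend to finish initializing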

deylyn commented 3 years ago

I'm using

import * as faceapi from "components/lib/face-api.esm";

await faceapi.tf.setWasmPaths("../statics/");
await faceapi.tf.setBackend("wasm");

but compilation is very slow, because the file weighs 3 MB.

I am using this face-api.esm because it was the one that solved the "Insufficient GPU memory to compile GL shaders" problem, following your earlier advice:

In general, if you don't have a dedicated GPU, using the WebGL backend is not recommended in any machine learning project - use the WASM backend instead.

What other option do I have?

vladmandic commented 3 years ago

@deylyn if you're using https://github.com/vladmandic/face-api, please post issues there

also, the size of the JS file has little to do with execution speed - but yes, it can be minified to around 1.3 MB if that means anything

since your device is really low on memory, make sure you're using the tinyFaceDetector model, not ssdMobilenetv1 - it's slightly less accurate, but far less memory demanding. notes on how to use it are in the docs.
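
a rough sketch of using tinyFaceDetector (the model path, element id, and option values below are placeholder assumptions, not code from this thread):

import * as faceapi from "face-api.js";

// load only the lightweight detector model (no ssdMobilenetv1)
await faceapi.nets.tinyFaceDetector.loadFromUri("/statics/models");

const video = document.getElementById("video");
const options = new faceapi.TinyFaceDetectorOptions({
  inputSize: 160,      // must be divisible by 32; smaller means less memory, lower accuracy
  scoreThreshold: 0.3, // minimum detection confidence
});

const detection = await faceapi.detectSingleFace(video, options);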

deylyn commented 3 years ago

I am already using tinyFaceDetector:

import * as faceapi from "components/lib/face-api.esm";

var backendWasm = await this.backendWasm();
Promise.all([faceapi.nets.tinyFaceDetector.loadFromUri("/statics/models")]);

this.video.addEventListener(
  "play",
  function() {
    self.canvasFace = document.getElementById("c1");
    self.displaySize = { width: self.videoWidth, height: self.videoHeight };
    faceapi.matchDimensions(self.canvasFace, self.displaySize);
    self.timerCallback();
  },
  false
);

timerCallback: function() {
  if (this.video.paused || this.video.ended) {
    return;
  }
  const box = { x: 75, y: 37.5, width: 150, height: 225 };
  var lenghtx = box.x + box.width;
  var lenghty = box.y + box.height;
  // see DrawBoxOptions below
  const drawOptions = { lineWidth: 2, boxColor: "#23b2be" };
  const drawBox = new faceapi.draw.DrawBox(box, drawOptions);
  drawBox.draw(this.canvasFace);

  let inputSize = 128;
  let scoreThresholds = 0.1;
  this.id = setInterval(async () => {
    let self = this;
    const useTinyModel = true;
    const detections = await faceapi.detectAllFaces(
      self.video,
      new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThresholds })
    );

    // resize the detected boxes in case your displayed image has a different size than the original
    const resizedDetections = faceapi.resizeResults(
      detections,
      self.displaySize
    );
    let boxArea = 35000;
    if (self.$q.screen.xs) {
      boxArea = 35000;
    } else {
      boxArea = 21000;
    }

    if (resizedDetections[0]) {
      let anchBoundBox = resizedDetections[0].box["width"] - 15;
      let xBox = box.x - 35;
      let altBoundBox = resizedDetections[0].box["height"] - 15;
      let yBox = box.y + 45;
      if (
        anchBoundBox + resizedDetections[0].box["x"] > lenghtx ||
        resizedDetections[0].box["x"] < xBox ||
        resizedDetections[0].box["y"] < yBox ||
        altBoundBox + resizedDetections[0].box["y"] > lenghty
      ) {
        self.validCentered = false;
        self.textAlert = "Centre su rostro";
        self.paintBoxAlert(self.textAlert);
        self.$emit("detect-centered", false);
      } else {
        self.validCentered = true;
        self.$emit("detect-centered", true);
      }
      if (resizedDetections[0].box.area > boxArea) {
        self.validProximity = true;
        self.$emit("detect-proximity", true);
      } else {
        self.textAlert = "Acerque su rostro";
        self.paintBoxAlert(self.textAlert);
        self.validProximity = false;
        self.$emit("detect-proximity", false);
      }
    } else {
      self.validProximity = false;
      self.validCentered = false;
    }
    if (self.validCentered && self.validProximity) {
      self.canvasFace
        .getContext("2d")
        .clearRect(0, 0, self.canvasFace.width, 36);
    }

  }, 250);
}
deylyn commented 3 years ago

async backendWasm() {
  await faceapi.tf.setWasmPaths("../statics/");
  await faceapi.tf.setBackend("wasm");
},

vladmandic commented 3 years ago

@deylyn like I said - if you're using https://github.com/vladmandic/face-api, please post issues there

but there are several major issues here:

timhwang21 commented 1 year ago

Sorry for the issue revival, but:

you're creating new instance of model on each frame:

Could you explain why this is an issue? Is it just JS object allocation? It appears this options object is just a plain object. And when passing this to detect, nothing particularly special is happening: inputSize, scoreThreshold are read off of it and that's it. I've noticed you make this recommendation several times and am curious what makes this a major issue? Thank you!

vladmandic commented 1 year ago

This is a 2-year-old thread, so it's really not good form to quote on it. The question may be valid, but that's what Discussions are for.

And it's really bad practice. Yes, it may be a lightweight allocation, but the fact is that you don't know that as a normal user, so doing it here means you're going to do it elsewhere as well. It's just bad practice. But yes, the other two issues I mentioned are more important.
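
a minimal sketch of the recommended pattern - load the model and create the options object once, outside the per-frame loop, and reuse them (element id, model path, and values below are placeholder assumptions):

import * as faceapi from "face-api.js";

await faceapi.nets.tinyFaceDetector.loadFromUri("/models");

// created once, not on every frame
const detectorOptions = new faceapi.TinyFaceDetectorOptions({
  inputSize: 128,
  scoreThreshold: 0.1,
});

const video = document.getElementById("video");
setInterval(async () => {
  const detections = await faceapi.detectAllFaces(video, detectorOptions);
  // ... use detections here (resizeResults, drawing, etc.)
}, 250);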