deylyn opened this issue 3 years ago
Insufficient GPU memory to compile GL shaders.
In general, if you don't have a dedicated GPU, use of WebGL
backend is not recommended in any machine learning project - Use WASM
backend instead.
how can I use backend wasm in this case?
after tfjs has been loaded, but before faceapi has been used:
await tf.setWasmPaths('./path-to-wasm-files/');
await tf.setBackend('wasm');
and you do need to provide the *.wasm files (part of the tfjs package itself), typically found in node_modules/@tensorflow/tfjs-backend-wasm/dist or downloaded from any CDN - however, in this case a quite old version, as faceapi uses tfjs 1.7.0 and the wasm version must match
or use a newer port of faceapi: https://github.com/vladmandic/face-api that's compatible with tfjs 2.x and 3.x
it's also recommended to enable SIMD, as performance is higher by an order of magnitude - for example, go to chrome://flags and enable WebAssembly SIMD support
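The steps above can be sketched as follows. This is a sketch, not part of either library: it assumes the newer face-api port, which re-exports its bundled tfjs as `faceapi.tf`, and the `wasmPathFor` / `useWasmBackend` helper names and the CDN URL pattern are made up for illustration.

```javascript
// The wasm binary version must match the tfjs version your faceapi bundles.
const TFJS_VERSION = "1.7.0";

// Hypothetical helper: jsdelivr mirrors the npm package layout,
// so dist/ is where the *.wasm files live.
function wasmPathFor(version) {
  return `https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${version}/dist/`;
}

async function useWasmBackend(tf) {
  // point tfjs at the *.wasm binaries before switching backends
  tf.setWasmPaths(wasmPathFor(TFJS_VERSION));
  await tf.setBackend("wasm");
  await tf.ready(); // wait until the backend is actually initialized
}
```

Then call it once after load, e.g. `await useWasmBackend(faceapi.tf);`, before the first detection.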
I'm using
import * as faceapi from "components/lib/face-api.esm";
await faceapi.tf.setWasmPaths("../statics/");
await faceapi.tf.setBackend("wasm");
but compilation is very slow, because the file weighs 3 MB.
I am using this face-api.esm because it was the one that solved the "Insufficient GPU memory to compile GL shaders" problem:
In general, if you don't have a dedicated GPU, use of WebGL backend is not recommended in any machine learning project - Use WASM backend instead.
What other option do I have?
@deylyn if you're using https://github.com/vladmandic/face-api, please post issues there
also, the size of the js file has little to do with execution speed - but yes, it can be minimized to around 1.3MB if that means anything
since your device is really low on memory, make sure you're using the tinyFaceDetector model, not ssdMobilenetv1 - it's slightly less accurate, but far less memory-demanding. notes on how to use it are in the docs.
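A minimal sketch of switching to the tiny model, following the face-api.js docs; the model path "/statics/models" is just this thread's layout, and the helper names are made up:

```javascript
// Load the tiny detector's weights once, up front - its weights are far
// smaller than ssdMobilenetv1's, which is what matters on low-memory devices.
async function loadTinyDetector(faceapi) {
  await faceapi.nets.tinyFaceDetector.loadFromUri("/statics/models");
}

function tinyOptions(faceapi) {
  // note the singular option name: scoreThreshold, not scoreThresholds
  return new faceapi.TinyFaceDetectorOptions({
    inputSize: 160,      // must be divisible by 32; smaller = less memory, lower accuracy
    scoreThreshold: 0.5, // higher values cut false positives and wasted processing
  });
}

async function detectTiny(faceapi, input) {
  return faceapi.detectAllFaces(input, tinyOptions(faceapi));
}
```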
I am using tinyFaceDetector:
import * as faceapi from "components/lib/face-api.esm";
var backendWasm = await this.backendWasm();
Promise.all([faceapi.nets.tinyFaceDetector.loadFromUri("/statics/models")]);

this.video.addEventListener(
  "play",
  function() {
    self.canvasFace = document.getElementById("c1");
    self.displaySize = { width: self.videoWidth, height: self.videoHeight };
    faceapi.matchDimensions(self.canvasFace, self.displaySize);
    self.timerCallback();
  },
  false
);

timerCallback: function() {
  if (this.video.paused || this.video.ended) {
    return;
  }
  const box = { x: 75, y: 37.5, width: 150, height: 225 };
  var lenghtx = box.x + box.width;
  var lenghty = box.y + box.height;
  // see DrawBoxOptions below
  const drawOptions = { lineWidth: 2, boxColor: "#23b2be" };
  const drawBox = new faceapi.draw.DrawBox(box, drawOptions);
  drawBox.draw(this.canvasFace);
let inputSize = 128;
let scoreThresholds = 0.1;
this.id = setInterval(async () => {
let self = this;
const useTinyModel = true;
const detections = await faceapi.detectAllFaces(
self.video,
new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThresholds })
);
// resize the detected boxes in case your displayed image has a different size than the original
const resizedDetections = faceapi.resizeResults(
detections,
self.displaySize
);
let boxArea = 35000;
if (self.$q.screen.xs) {
boxArea = 35000;
} else {
boxArea = 21000;
}
if (resizedDetections[0]) {
let anchBoundBox = resizedDetections[0].box["width"] - 15;
let xBox = box.x - 35;
let altBoundBox = resizedDetections[0].box["height"] - 15;
let yBox = box.y + 45;
if (
anchBoundBox + resizedDetections[0].box["x"] > lenghtx ||
resizedDetections[0].box["x"] < xBox ||
resizedDetections[0].box["y"] < yBox ||
altBoundBox + resizedDetections[0].box["y"] > lenghty
) {
self.validCentered = false;
self.textAlert = "Centre su rostro";
self.paintBoxAlert(self.textAlert);
self.$emit("detect-centered", false);
} else {
self.validCentered = true;
self.$emit("detect-centered", true);
}
if (resizedDetections[0].box.area > boxArea) {
self.validProximity = true;
self.$emit("detect-proximity", true);
} else {
self.textAlert = "Acerque su rostro";
self.paintBoxAlert(self.textAlert);
self.validProximity = false;
self.$emit("detect-proximity", false);
}
} else {
self.validProximity = false;
self.validCentered = false;
}
if (self.validCentered && self.validProximity) {
self.canvasFace
.getContext("2d")
.clearRect(0, 0, self.canvasFace.width, 36);
}
}, 250);
}
async backendWasm() {
  await faceapi.tf.setWasmPaths("../statics/");
  await faceapi.tf.setBackend("wasm");
},
@deylyn like I said - if you're using https://github.com/vladmandic/face-api, please post issues there
but there are several major issues here:
- you're creating new instance of model on each frame: new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThresholds })
- instead of setInterval, refactor the code to call itself upon completion using requestAnimationFrame
- set scoreThreshold to a reasonable value; low values mean that the library has to do far more processing for all possible false-positives

Sorry for the issue revival, but:
you're creating new instance of model on each frame:
Could you explain why this is an issue? Is it just JS object allocation? It appears this options object is just a plain object, and when passing it to detect, nothing particularly special happens: inputSize and scoreThreshold are read off of it and that's it. I've noticed you make this recommendation several times and am curious what makes this a major issue? Thank you!
This is a 2-year-old thread - so it's really not good form to quote on it. The question may be valid, but that's what discussions are for.
And it's really a bad practice. Yes, it may be a lightweight allocation, but the fact is you don't know that as a normal user, so doing it here means you're going to do it elsewhere as well. It's just bad practice. But yes, the other two issues I've mentioned are more important.
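For reference, the two bigger issues called out earlier - replacing setInterval with a self-scheduling requestAnimationFrame loop, and constructing the options once instead of per frame - could be sketched like this. It's a sketch under stated assumptions, not part of the library: `startDetectionLoop` and `onDetections` are made-up names, and requestAnimationFrame is assumed available (i.e. browser context).

```javascript
// Options are constructed once, outside the loop, and the loop re-schedules
// itself only after each detection completes - so a slow frame delays the
// next pass instead of piling up the way a fixed setInterval timer can.
function startDetectionLoop(faceapi, video, onDetections) {
  const options = new faceapi.TinyFaceDetectorOptions({
    inputSize: 160,
    scoreThreshold: 0.5,
  });
  let running = true;

  async function step() {
    if (!running) return;
    const detections = await faceapi.detectAllFaces(video, options);
    onDetections(detections);
    requestAnimationFrame(step); // schedule the next pass after this one finishes
  }

  requestAnimationFrame(step);
  return () => { running = false; }; // call the returned function to stop
}
```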