Closed · amit0shakya closed this issue 3 years ago

Original issue: It usually takes 30-40 seconds to detect faces. Is there any way I can optimize this and get results in 10-15 seconds?
it never takes 30-40 sec to detect faces, it's pretty much sub-1sec.
now, if the initial detection takes a long time, that is most likely because your configuration uses the webgl backend, and tfjs takes some time to "warm up" the model (compile GLES shaders and upload weights as textures to the GPU). but after the initial detection, anything afterwards is really fast.
on the other hand, if you used the wasm backend, warmup doesn't exist, but actual detection is quite a lot slower than with webgl (though still well below the 1sec threshold).
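A minimal sketch of forcing that warmup explicitly, so shader compilation happens right after model load instead of on the user's first real frame; the canvas size is an arbitrary assumption, and detectAllFaces stands in for whichever detection call you actually use:

// run one dummy detection on a blank canvas right after the models load;
// the webgl backend compiles its shaders here, so the first real frame is fast
async function warmup() {
  const canvas = document.createElement('canvas');
  canvas.width = 640;
  canvas.height = 480;
  // assumes the detector and landmark models are already loaded
  await faceapi.detectAllFaces(canvas).withFaceLandmarks(); // result is discarded
}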
What backend would you recommend for CPU optimization if we wanted to run detectAllFaces with expressions, but not gender and age?
it's less a question of which models are executed, and more a question of where they are executed. each backend has its pros and cons: webgl has a slow warmup but fast per-frame inference, while wasm has a fast warmup but slower per-frame inference - and since wasm runs on the CPU, it's the natural choice if you're optimizing for CPU.
I am using face-api in React, and it's an older version of face-api, I think, which uses TensorFlow 1. Should I change to @vladmandic/face-api - would that improve performance? I have noticed it is very slow on mobile, and most users will access this through mobile.
no, the update is for compatibility reasons. any performance improvements are minor (~5-10%).
what changes should I make to boost its performance? In my case it is taking 30-40 seconds. Here are my face-api dependencies:
"@mapbox/node-pre-gyp": "^1.0.4",
"@tensorflow/tfjs-core": "1.7.0",
"@tensorflow/tfjs-node": "1.7.0",
"face-api.js": "^0.22.2",
"canvas": "2.6.1",
we are going back and forth with generic statements - boost performance from what? on what platform?
the 30-40sec noted in your original post is NOT detect time, i already wrote that.
be as specific as possible. list exact times for each step. list your actual configuration. versions of packages alone say nothing. then i may be able to help.
you still did not post exact times, nor did you post your configuration. i don't even know which backend you are using. i just tried the "original sample" link on my android-based mobile phone, and loading plus warmup is ~2sec, with any subsequent detection <0.2sec. and your site has so many elements that face-api is lost between all the noise. not to mention that looking at minified code is not something i would do. i give up :(
My project backend is Node.js in a Linux environment, and the frontend is implemented in JavaScript. I am trying to get face LANDMARKS. I don't know how to figure out the time for each step. face-api works on TensorFlow v1, so it works on an older version of Node; I am using Node version 8.
"@tensorflow/tfjs-core": "1.7.0",
"@tensorflow/tfjs-node": "1.7.0",
"canvas": "^2.7.0",
"face-api.js": "^0.22.2",
await faceapi.nets.ssdMobilenetv1.loadFromUri(path);
await faceapi.nets.faceLandmark68Net.loadFromUri(path);
await faceapi.nets.faceRecognitionNet.loadFromUri(path);
await faceapi.nets.tinyYolov2.loadFromUri(path);

const mypic = document.getElementById('userimg');
const detections = await faceapi.detectAllFaces(mypic).withFaceLandmarks();
I am passing the original image the user uploads for face landmark detection; it is normally 960x1280.
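As a minimal sketch of how to list exact times for each step, as asked above - this reuses the path and mypic variables from the snippet and assumes it runs in the browser:

// bracket each step with performance.now() to see where the time actually goes
const t0 = performance.now();
await faceapi.nets.ssdMobilenetv1.loadFromUri(path);
await faceapi.nets.faceLandmark68Net.loadFromUri(path);
const t1 = performance.now();
console.log(`model load: ${(t1 - t0).toFixed(0)}ms`);

await faceapi.detectAllFaces(mypic).withFaceLandmarks();
const t2 = performance.now();
console.log(`first detection (includes warmup): ${(t2 - t1).toFixed(0)}ms`);

await faceapi.detectAllFaces(mypic).withFaceLandmarks();
const t3 = performance.now();
console.log(`subsequent detection: ${(t3 - t2).toFixed(0)}ms`);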
Well, I don't know why, but for me, the FIRST time, right after it opens my camera, it takes about 30 seconds to find the first face (even if I'm sitting still right in front of it); then it works perfectly.
Watching the logs, it's NOT related to loading resources...
I pinpointed it to this piece of code:
console.log(1);
faceapi.detectSingleFace(
  video,
  // note: TinyFaceDetectorOptions takes scoreThreshold, not minConfidence
  new faceapi.TinyFaceDetectorOptions({ scoreThreshold: 0.5 })
)
  .withFaceLandmarks()
  .then(...) // console.log(2);
  .catch(...) // console.log(3);
I see the 1 in the log right away... then, about 30 seconds later, I see the 2.
I call the function again when then is triggered... from that point on, it works smoothly.
Tried it on a MacBook Pro in Chrome and Firefox.
Interestingly enough... the samples/demos run as expected (really quickly).
I'm rendering it in a React component, but as far as my logs show, it's not being rendered or processed more times than it should be.
@felipenmoura What you're talking about is called warmup, and it heavily depends on the backend you're using.
For example, WASM has fast warmup but slower inference (meaning the first frame is faster, but every frame after that is not that great), while WebGL has much slower warmup (which is most likely what you're seeing), but then inference for each frame is faster than with the WASM backend.
This is common to every TensorFlow model; it's not specific to FaceAPI at all.
On a side note, don't create the Options object each time; create it outside the loop during component initialization and then re-use it (see the sketch below). That has no impact on the 30sec delay you're seeing, but it will bring a significant benefit to overall performance.
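A minimal sketch of that re-use pattern; the detectLoop wiring is a hypothetical stand-in for however your component actually schedules frames:

// create the options once, during component initialization...
const detectorOptions = new faceapi.TinyFaceDetectorOptions({ scoreThreshold: 0.5 });

async function detectLoop(video) {
  // ...and re-use the same instance on every frame
  const result = await faceapi
    .detectSingleFace(video, detectorOptions)
    .withFaceLandmarks();
  // handle result here, then schedule the next frame
  requestAnimationFrame(() => detectLoop(video));
}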
Hm I see. Interesting, thanks. Is there a way I can speed up this first process? I mean, is there a way I can swap/decide between wasm and webgl?
@felipenmoura simply initialize tensorflow with the appropriate backend before calling face-api, and face-api will use whatever is set:
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-webgl';
tf.setBackend('webgl');
await tf.ready();
or
import * as tf from '@tensorflow/tfjs';
import * as wasm from '@tensorflow/tfjs-backend-wasm';
wasm.setWasmPaths('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@3.8.0/dist/');
tf.setBackend('wasm');
await tf.ready();
note that the tfjs version and the version of the WASM binaries must match the version used by face-api (which in the case of this original version is quite old - 1.7.0, definitely not the latest 3.8.0), or use an up-to-date port like https://github.com/vladmandic/face-api
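A minimal sketch of keeping those versions in lockstep automatically instead of hard-coding the CDN URL; it assumes the jsdelivr path layout shown above and uses the tf.version_core export from tfjs to build the URL:

import * as tf from '@tensorflow/tfjs';
import * as wasm from '@tensorflow/tfjs-backend-wasm';

// derive the CDN path from the installed tfjs-core version so the
// .wasm binaries cannot drift out of sync with the JS side
wasm.setWasmPaths(`https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${tf.version_core}/dist/`);
await tf.setBackend('wasm');
await tf.ready();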
Awesome, thanks. I see some interesting parts of the documentation over there that I hadn't seen yet! I did NOT have tensorflow installed as a dependency... does that mean it was actually using my regular GPU/CPU cycles to process it? I'm now trying to build it using the updated repo you linked, thanks.
FaceAPI is built using TensorFlow/JS no matter what:
the original face-api.js has TFJS 1.7.0 embedded - that's why you don't have external dependencies
@vladmandic/face-api has both embedded and non-embedded versions and is based on TFJS 3.8.0
Now, TFJS can use different backends (cpu, wasm, webgl) to execute the actual ML operations.
Awesome, thanks for this complete reply. I managed to start using wasm and the warmup period is perceptibly faster :)
Hey there...sorry for pinging here again, but it seems like there's something I'm missing!
I'm trying to dynamically import the dependencies only if the user selects the "selfie" option. It works, but it never uses wasm!
Promise.all([
  import('@tensorflow/tfjs'),             // dynamic import
  import('@tensorflow/tfjs-backend-wasm') // dynamic import
]).then(async ([tf, wasm]) => {
  window.tf = tf;
  window.wasm = wasm;
  wasm.setWasmPaths('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@3.9.0/dist/');
  await tf.setBackend('wasm');
  await tf.ready();
  await createScript('/face-detection-ia/face-api.min.js', scrId); // imports the script from /public
  faceapi = window.faceapi; // tried this to see if anything would change...but nope
  try {
    await Promise.all([
      faceapi.nets.tinyFaceDetector.loadFromUri('/face-detection-ia/models/'),
      faceapi.nets.faceLandmark68Net.loadFromUri('/face-detection-ia/models/'),
    ]); // this also loads all the models OK
    console.log('DONE loading AI stuff');
    console.log(faceapi.tf.getBackend()); // webgl <<<<<----- !!!!!!!!!
  } catch (error) {
    console.error('ERROR loading AI stuff', error);
    return;
  }
});
It IS loading everything. It IS waiting for everything to be ready. It DOES NOT trigger any error... but it still uses webgl, with quite poor performance and a long warmup.
Any idea what I might be missing here?
By the way... I had to do all that because I'm using next.js, and its SSR insists on trying to render it on the backend, requiring many other dependencies... I'm trying to avoid that by ensuring it ONLY runs on the client side.
By the way 2... My @tensorflow/tfjs's version is 3.9.0, as is tfjs-backend-wasm's version. Is this what you meant about matching versions?
My @tensorflow/tfjs's version is 3.9.0 as well as the tfjs-backend-wasm's version. Is this what you meant about matching versions?
No, that's the smaller part of it. The important part is that @tensorflow/tfjs-backend-wasm and the wasm binary set by wasm.setWasmPaths are of the same version - which it looks like they are.
By the way...I had to do all that because I'm using next.js and its SSR insists on trying to render it in the backend,
requiring many other dependencies...I'm trying to avoid that by ensuring it ONLY runs on client side.
you need to tell next.js not to mess with the dependencies.
for example, this is a next.config.js i've tested a while back:
module.exports = {
webpack: (config) => {
if (config.target === 'web') config.externals = [ 'fs', 'os', 'util' ];
return config;
},
}
loading WASM modules via next.js is tricky at best.
if you want to dynamically import wasm and not have SSR, i don't see where that is set? in the past i've done something like this:
import dynamic from 'next/dynamic';

const wasm = dynamic(
  () => import('@tensorflow/tfjs-backend-wasm'),
  { ssr: false }
);
and that should be in the component init, not calling faceapi immediately afterwards - how would you then use faceapi the second time? load the modules again?
but it still uses webgl and has quite a poor performance and long warmup.
after tf.ready(), do a console.log(tf.getBackend()); to see what the actual backend in use is
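One thing worth checking, as a sketch under an assumption: the bundled face-api.min.js embeds its own copy of TFJS (as noted above for the original 0.22.x build), so setting the backend on a separately imported tf may not affect the instance face-api actually runs on. Inspecting and setting the backend through faceapi.tf would test that:

// after face-api.min.js is loaded, target the TFJS instance it embeds
// rather than the separately imported one
const faceapi = window.faceapi;
console.log(faceapi.tf.version, faceapi.tf.getBackend()); // which TFJS is face-api really using?
// try setting the backend on that embedded instance directly;
// this only works if the embedded bundle includes the wasm backend
await faceapi.tf.setBackend('wasm');
await faceapi.tf.ready();
console.log(faceapi.tf.getBackend());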
btw, this is no longer related to the original issue - why not create a new one, 'how to use faceapi with wasm in next.js'? and if you're using the new faceapi (which it seems you are, since you're loading wasm 3.9.0), open an issue in that git repository.
Sure thing. Thanks, I just created the issue over there: https://github.com/vladmandic/face-api/issues/65