Closed IgorSamer closed 2 years ago
The methods in `human.draw` are only helpers that produce some predefined draw outputs; what you want is a custom draw. You already have both the canvas and the detection results: from those, get the face box coordinates and draw text on screen at the given coordinates. Take a look at https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Drawing_text and if you get stuck, let me know.
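Since there is no built-in method for this, here is a minimal sketch of the custom draw described above. `labelAnchor` is a hypothetical helper, not part of the Human API; it assumes a face box in `[x, y, width, height]` form, as in the detection result:

```javascript
// Hypothetical helper: given a face box [x, y, width, height] from the
// detection result, return where to place the label text.
function labelAnchor(box, position = 'top', offset = 10) {
  const [x, y, , height] = box;
  return position === 'top'
    ? { x, y: y - offset }            // just above the box
    : { x, y: y + height + offset };  // just below the box
}

// Browser-side usage (assumes `context` is a CanvasRenderingContext2D and
// `interpolated` is a Human detection result, as in the demo code):
// const face = interpolated.face[0];
// if (face) {
//   const { x, y } = labelAnchor(face.box, 'top');
//   context.strokeStyle = '#fff';
//   context.strokeText(label || 'unknown', x, y);
//   context.fillText(label || 'unknown', x, y);
// }
```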
I knew it could be done directly through the canvas and the coordinates, but I just thought that there might be some method in Human that I haven't found yet that does that too.
Another question: I'm using `setTimeout(drawLoop, 30)` as I saw in your demo, but I know the same can be done through `requestAnimationFrame(() => drawLoop())`. Would `requestAnimationFrame` be more suitable, or would it give the same result?
And I saw on https://github.com/vladmandic/human/wiki/Backends#how-to-enable-wasm-simd-support that for WASM the results are better when using WASM SIMD, but I didn't find that WebAssembly SIMD support option in the flags. What would be the correct option?
> I knew it could be done directly through the canvas and the coordinates, but I just thought that there might be some method in Human that I haven't found yet that does that too.
If you have a suggestion for some built-in helper methods, let me know.
> Another question: I'm using `setTimeout(drawLoop, 30)` that I saw on your demo, but I know the same can be done through `requestAnimationFrame(() => drawLoop())`. Would `requestAnimationFrame` be more suitable or would it give the same result?
They both work. The browser schedules `requestAnimationFrame` to run as near as possible to 60 FPS, so it gives a higher refresh rate but leaves a bit less time available for background detection, while `setTimeout(drawLoop, 30)` targets ~30 FPS with a bit more time left for detection. It all depends on which you prefer.
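Since the only difference between the two approaches is how the next frame gets scheduled, the draw code itself can be kept scheduler-agnostic. A small sketch (all names here are illustrative, not part of the Human API):

```javascript
// makeLoop wires a per-frame callback to any scheduler, so the same draw
// code can run under setTimeout (~30 FPS) or requestAnimationFrame (~60 FPS).
function makeLoop(schedule, onFrame) {
  let frames = 0;
  const tick = () => {
    frames += 1;
    onFrame(frames); // e.g. draw interpolated results onto the canvas
    schedule(tick);  // queue the next frame
  };
  return { start: tick, count: () => frames };
}

// Browser usage (illustrative):
// const loop = makeLoop((fn) => setTimeout(fn, 30), drawFrame);        // ~30 FPS
// const loop = makeLoop((fn) => requestAnimationFrame(fn), drawFrame); // ~60 FPS
// loop.start();
```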
Both `simd` and `multi-threading` are (finally) enabled by default in the latest versions of Chrome.
You can check what `human` detected (all environment values are available in the `env` namespace):
```javascript
import * as Human from '@vladmandic/human';

Human.env.updateBackend();
console.log(Human.env.wasm);
```
I'm closing this issue as it's not a product issue, but feel free to post further in this thread.
> If you have a suggestion for some built-in helper methods, let me know.
I'm currently doing something like this:
```javascript
if (interpolated.face[0]) {
  context.strokeStyle = '#fff';
  context.strokeText(label || 'unknown', interpolated.face[0].box[0], interpolated.face[0].box[1] - 10);
  context.fillText(label || 'unknown', interpolated.face[0].box[0], interpolated.face[0].box[1] - 10);
}
```
It is the expected result, but I think there could be an option to define a label, its position (top or bottom), and its stroke color directly in `DrawOptions`, which would mean renaming the `drawLabels` property to `drawDefaultLabels`, for example. This label would sit above or below the box lines and not inside, so using `drawDefaultLabels: true` and `label: 'Text'` together wouldn't be a problem, as the default labels are drawn inside.
```javascript
drawOptions = { drawDefaultLabels: false, label: 'Label', labelStrokeColor: 'white', labelPosition: 'top' };
```
I think it would be simpler and more intuitive. I opened this issue even though I knew I could use `fillText` because I remember you commenting in some other issue (I don't remember which one now) about an error or some overhead when using `canvas.getContext('2d')`, since Human/face-api/TF also use the context at the same time (correct me if I'm wrong).
> Both `simd` and `multi-threading` are (finally) enabled by default in the latest versions of Chrome.
Setting the backend to WASM and checking `human.env.wasm`, I got:

```
wasm {
  backend: true,
  multithread: false,
  simd: true,
  supported: true
}
```
And there is also this message in the console: `Human: wasm execution: simd singlethreaded`.
Should `multithread` be `true` for better performance? If so, how do I enable it?
> `drawOptions = { drawDefaultLabels: false, label: 'Label', labelStrokeColor: 'white', labelPosition: 'top' };`
Hmm, valid idea. I'm not sure how clean it is (easy to implement, I mean, from a user perspective). I'll give it some thought.
> And there is also this message in the console: `Human: wasm execution: simd singlethreaded`.
I just tried with Chrome and Edge, and both report `simd: true` and `multithread: true` on my test system.
> Should `multithread` be `true` for better performance? If so, how to enable it?
Yes, although the performance gains are not as major (SIMD does make a huge difference, more than doubling performance). But there is no way to force-enable multi-threading in Chrome anymore - no idea why it's disabled on your system.
> But there is no way to force-enable multi-threading in Chrome anymore - no idea why it's disabled on your system.
Alright.
I noticed that when `drawLabels` and the `iris` module are enabled, it shows a `distance` value in the labels (the `iris` property in the result of `interpolated.face`). This value is the distance from the face to the camera, right?
And in the gyms that use my system, the webcam is in front of a turnstile, so we detect the person and allow or deny access. However, in some cases the lighting is not the best, leaving the face overexposed or heavily shadowed. In these cases, if I adjust the contrast, brightness, etc., or even apply the negative through the `filter` module, can this help the algorithm with face detection, or are the filters just for visualization purposes?
> This value is the distance from the face to the camera, right?
Correct. It's a guess based on the size of the iris (as iris sizes are pretty constant across all people). So if the iris is clearly visible, it's a good guess; otherwise it's not really :)
> In these cases, if I adjust the contrast, brightness, etc., or even apply the negative through the `filter` module, can this help the algorithm with face detection, or are the filters just for visualization purposes?
Correct - that's what filters are for, not just visualization! There is also built-in histogram equalization (just enable it in the config), which helps a lot with darker scenes (as long as they are not too noisy).
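A sketch of a config fragment enabling those filters, assuming the option names from Human's `config.filter` section (verify the exact keys and value ranges against the docs for your version):

```javascript
// Sketch of a Human config enabling input filters before detection.
// Key names follow Human's config.filter section; treat them as assumptions
// to verify against the documentation for your installed version.
const config = {
  backend: 'wasm',
  filter: {
    enabled: true,      // run the input through the filter pipeline
    equalization: true, // histogram equalization - helps darker scenes
    brightness: 0.1,    // small manual lift
    contrast: 0.1,      // small manual contrast boost
  },
};

// Usage (illustrative): const human = new Human(config);
```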
I like your requirement for custom labels, but I've decided to implement it differently - as label templates.
See https://github.com/vladmandic/human/wiki/Draw for details.
BTW, the code is on the GitHub main branch, but not yet released on npm; it will be released as part of the upcoming 3.0 release as soon as some remaining bugs in tfjs are sorted.
Nice! Thank you so much for the clarifications!
I'm using the library without any problems and it is detecting and registering faces normally. However, I confess that I'm having difficulties following the documentation.
When I was using face-api.js and I got the detection results, I used `faceapi.draw.DrawBox(box, options).draw(canvas)`: I passed the result box as a parameter, and in the options I specified a label and also the box color. I'm following your examples (code below) to detect faces directly from the webcam and recognize the faces already registered, but I don't know how to draw the label, or whether the code I have so far is written in the best way.
How to proceed? Thanks!