I was getting very mediocre results when tracking multiple smaller faces.
This is an example of what I mean:
So I changed faceDetectorImageSize to (1280*720) (it seems to prevent the program from resizing my image), and that seemed to expand the bounding box enough that I am now detecting all the faces I need!
However, the problem is that now I am getting very 'jittery' results. They self-correct, but spend a noticeable amount of time in weird jitter states like this:
and
I was getting considerably more accurate tracking across multiple targets with this python/dlib library, FaceSwap, although it is very slow relative to this one (I think that might be partly because it does no clever 'tracking' across multiple targets).
I'm wondering if this is a limitation of the library, or if there's some kind of RoI improvement (?) that could reduce this jitter.
It seems the 'jitter' is just a byproduct of the face-tracking optimization. You can get better results if you run the tracker every frame, but that is extremely slow.
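As a rough illustration of a middle ground between those two extremes: a minimal sketch (pure Python, function and parameter names are mine, not from this library) of smoothing the detected bounding boxes with an exponential moving average, which damps frame-to-frame jitter without running a heavier detector every frame:

```python
# Hedged sketch: smooth jittery per-frame bounding boxes with an
# exponential moving average (EMA). Boxes are (x, y, w, h) tuples.

def smooth_box(prev, new, alpha=0.5):
    """Blend the previous smoothed box with the newly detected one.

    alpha in (0, 1]: higher trusts the fresh detection more,
    lower suppresses jitter more (at the cost of some lag).
    """
    if prev is None:
        return new  # first frame: nothing to blend with yet
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))

# Feed a jittery sequence of detections through the filter.
detections = [(100, 100, 50, 50), (108, 96, 52, 49), (99, 103, 50, 51)]
smoothed = None
for box in detections:
    smoothed = smooth_box(smoothed, box, alpha=0.4)
print(smoothed)
```

The tradeoff is the same one described above in miniature: more smoothing means less jitter but slower response when a face actually moves.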
Thanks :)
Sam