@pospielov
It can be implemented using the landmarks plugin. Using it, you can check what the current head pose is.
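For illustration, here is a minimal Python sketch of how such a head-pose check could look. It assumes a CompreFace Detection service running on localhost:8000, that the detect endpoint accepts a face_plugins=landmarks parameter, and that the response contains a landmarks list of five [x, y] points (eyes, nose, mouth corners); the URL, API key, and the 0.25 threshold are placeholders, so adjust them to your setup:

```python
# Sketch: estimate horizontal head turn (yaw) from the 5 points returned
# by the standard landmarks plugin. Assumes result[i]["landmarks"] is a
# list of [x, y] pairs in the order: left eye, right eye, nose,
# left mouth corner, right mouth corner. Verify against your CompreFace
# version's actual response format.
import requests

API_KEY = "your-detection-service-api-key"   # placeholder
DETECT_URL = "http://localhost:8000/api/v1/detection/detect"

def detect_landmarks(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(
            DETECT_URL,
            params={"face_plugins": "landmarks"},
            headers={"x-api-key": API_KEY},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["result"][0]["landmarks"]

def head_turn(landmarks):
    """Classify yaw as 'left', 'right' or 'center' from eye/nose geometry."""
    (lx, _), (rx, _), (nx, _), _, _ = landmarks
    eye_span = rx - lx
    # How far the nose sits from the midpoint between the eyes,
    # normalized by the eye distance. The sign convention depends on
    # whether your camera image is mirrored.
    offset = (nx - (lx + rx) / 2) / eye_span
    if offset > 0.25:
        return "right"
    if offset < -0.25:
        return "left"
    return "center"

print(head_turn(detect_landmarks("frame.jpg")))
```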
We don't have an article about it, but here is an example of how Amazon recommends implementing it using their facial recognition service:
https://aws.amazon.com/blogs/industries/liveness-detection-to-improve-fraud-prevention-in-financial-institutions-with-amazon-rekognition/
There is an even more powerful plugin, landmarks2d106, that returns 106 points. Using it, you can check whether the mouth or eyes are open.
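As a rough sketch, an "open mouth" check on the 106-point output could look like the snippet below. The point indices are placeholders, not the real mapping; look up the 2d106 landmark layout used by your build and substitute the indices for the upper/lower lip and mouth corners (the same idea works for the eyelids to detect blinking):

```python
# Sketch: "mouth open" check on the 106-point landmarks2d106 output.
# The indices below are HYPOTHETICAL placeholders; replace them with the
# actual 2d106 indices for the lip and mouth-corner points.
UPPER_LIP, LOWER_LIP = 66, 79        # hypothetical indices
LEFT_MOUTH, RIGHT_MOUTH = 52, 61     # hypothetical indices

def mouth_open(landmarks, threshold=0.35):
    """True if the vertical lip gap is large relative to the mouth width."""
    gap = abs(landmarks[LOWER_LIP][1] - landmarks[UPPER_LIP][1])
    width = abs(landmarks[RIGHT_MOUTH][0] - landmarks[LEFT_MOUTH][0])
    return gap / width > threshold
```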
By default, landmarks2d106 is not included in CompreFace. It replaces the standard landmarks plugin, which is why we can't support both.
You can build CompreFace with this plugin using instructions from here:
https://github.com/exadel-inc/CompreFace/tree/master/embedding-calculator
Or just use my prebuilt image:
pospielov/compreface-core:1.0.0-mobilenet-2d106
Many thanks! What about the liveness challenges? I took a look at what you shared with me, "https://aws.amazon.com/blogs/industries/liveness-detection-to-improve-fraud-prevention-in-financial-institutions-with-amazon-rekognition/",
but the whole code is tied to the Amazon infrastructure. Do you recommend any project or something like that to integrate the face liveness test?
Unfortunately, I don't know of such projects. I'm afraid you need to implement it yourself. I believe it's possible to make it work locally, without the Amazon infrastructure.
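If it helps, a local challenge-response flow in the spirit of that AWS article could be sketched roughly like this, reusing the head_turn() and mouth_open() helpers from the examples above. Note that they target different landmark layouts (5-point vs 106-point), so keep only the checks that match the plugin you run; capture_frames() is a hypothetical helper that grabs a few webcam frames (e.g. with OpenCV) and saves them as image files:

```python
# Sketch of a local challenge-response liveness flow: ask the user to
# perform a random action, then verify it on a few captured frames using
# the landmark-based checks defined earlier in this thread.
import random

CHALLENGES = {
    "turn your head left":  lambda lm: head_turn(lm) == "left",
    "turn your head right": lambda lm: head_turn(lm) == "right",
    "open your mouth":      lambda lm: mouth_open(lm),
}

def run_liveness_check(capture_frames):
    prompt, passed = random.choice(list(CHALLENGES.items()))
    print(f"Please {prompt}")
    frame_paths = capture_frames(n=10)   # hypothetical capture helper
    # The challenge passes if any captured frame satisfies the check.
    return any(passed(detect_landmarks(p)) for p in frame_paths)
```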
You need to stop CompreFace: docker-compose down
Then replace the image in the docker-compose file. Instead of:
restart: always
image: ${registry}compreface-core:${CORE_VERSION}
container_name: "compreface-core"
environment:
- ML_PORT=3000
it will be:
restart: always
image: pospielov/compreface-core:1.0.0-mobilenet-2d106
container_name: "compreface-core"
environment:
- ML_PORT=3000
Then run CompreFace again with docker-compose up -d.
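Once the container is back up, one way to sanity-check the swap (a sketch reusing the detect_landmarks() helper from above) is to confirm that the detect call now returns 106 points per face instead of 5:

```python
# Sanity check after swapping to the 2d106 image: the landmarks plugin
# should now return 106 points per face instead of the default 5.
points = detect_landmarks("frame.jpg")
print(len(points))   # expect 106 with the 2d106 build
```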
Worked like a charm! I will push my contraption soon! Many thanks!
Yes, I know that the GUI is broken; this is why it's not in the official distribution.
Any update on this? Since this is an old issue, is there any chance there is a new plugin or something else to do liveness now?
Hello, I want to know whether this kind of detection (the user should perform specific eye, mouth, or head movements) is supported or could be implemented. If anyone has an idea of how to get that done, I'm open to hearing it.
Thanks