robmarkcole / HASS-Deepstack-face

Home Assistant custom component for using Deepstack face recognition
https://community.home-assistant.io/t/face-and-person-detection-with-deepstack-local-and-free/92041
MIT License

Face recognized even below the minimum confidence threshold: how to avoid? #50

Closed ocsele closed 3 years ago

ocsele commented 3 years ago

Dear Rob,

please help me with your fantastic piece of software. I taught Deepstack my face with the deepstack_teach_face service, and sadly I noticed that Deepstack identified my garbage man as me :)

What I mean is that during the latest recognition it returned my name as recognized, but with a confidence of 59%, lower than the default threshold (which is 67% if I'm not mistaken). I guess the face-registration process is done with this default confidence threshold, right? I couldn't find a way to set a minimum threshold with the service during the registration process.


Why did it return my name when the confidence is so low? How can I avoid situations like this in the future? Many thanks for your efforts, Rob!

DivanX10 commented 3 years ago

The robmarkcole integration has nothing to do with facial-recognition accuracy; it is just a layer between the Deepstack server and Home Assistant. You need to write to the authors of Deepstack.

csthomas2 commented 3 years ago

I am having this issue as well; every face is being recognized as one of the 3 faces I've taught Deepstack, and it returns a result with a confidence level below the default threshold. When the Beatles photo bundled with Facebox is presented to Deepstack, it returns 16 results, and somehow all 16 individuals are reported as matching 1 of the 3 faces it knows.

As you can see in the documentation for Deepstack, the REST API supports setting a custom threshold value when posting to the server. The HASS integration takes care of these requests.

https://docs.deepstack.cc/face-recognition/index.html#setting-minimum-confidence
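For anyone who wants to try this outside Home Assistant first, here is a minimal sketch of posting an image with a minimum-confidence value, per the docs linked above. The URL, port, and 0.67 threshold are assumptions; adjust them for your own DeepStack instance.

```python
# Hypothetical endpoint; adjust host/port for your DeepStack instance.
DEEPSTACK_URL = "http://localhost:80/v1/vision/face/recognize"

def recognize_faces(image_path, min_confidence=0.67):
    """Ask DeepStack to recognize faces, passing min_confidence so the
    server itself suppresses low-confidence matches."""
    import requests  # third-party: pip install requests

    with open(image_path, "rb") as f:
        response = requests.post(
            DEEPSTACK_URL,
            files={"image": f.read()},
            data={"min_confidence": min_confidence},
        )
    return response.json().get("predictions", [])

def filter_predictions(predictions, min_confidence):
    """Client-side fallback: drop predictions below the threshold, in case
    the server-side parameter is not honored."""
    return [p for p in predictions if p.get("confidence", 0.0) >= min_confidence]
```

The client-side filter is useful on its own if you only have the raw predictions back from the server.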

DevTodd commented 3 years ago

This issue drove me nuts, since someone with a confidence of 0.01% is still reported as a learned person. There should be a way to pass the confidence level to Deepstack so that it returns unknown. The only time it actually returns unknown without the confidence level being passed is when there is no face to detect.

While I can't change the owner's code, I can create an automation that filters to my liking. I'll include it below for anyone who wants a hack of a fix. The only issue is that the returned image will still be boxed if you have boxes turned on, because Deepstack is returning a match; I am just filtering my textual message based on my requirements.

alias: Process Data from Facial Recognition
description: ''
trigger:

DivanX10 commented 3 years ago

You can also use this option: add code to the deepstack_face integration itself that outputs names only when the person's confidence is greater than 70. It really works, and now I don't get false recognitions.

After you add the code to the deepstack_face integration, you need to create a sensor that will extract the names:

sensor:
  - platform: template
    sensors:
      persons_names:
        friendly_name: 'Names of identified persons'
        value_template: >
            {{ states.image_processing.your_camera_entity.attributes.faces[0].bounding_box.names }}

If you want to filter out unknown persons, use this condition in the automation itself; it will exclude unknown persons and only pass when the sensor shows the name of an identified person:

{% if 'unknown' in states('sensor.persons_names') %}
false
{% else %}
true
{% endif %}

I also recommend trying this deepstack client. It is light and not resource-hungry, and it has a photo gallery; in the new version the author will add editing and deleting photos in the gallery.

DevTodd commented 3 years ago

@DivanX10 while that would work, it's far more complicated than just extracting the data from what we already have. On top of that, you have to modify the integration's code each time you want to add a new person, and again whenever you update the integration, unless you fork your own.

Additionally, there are 3 conditions as I see it: a person whose face isn't visible, a person whose face is visible and unknown, and a person whose face is visible and known.

If you wanted to do name extraction only, you could easily adapt my example into a sensor and do the same thing without having to alter any code.

DivanX10 commented 3 years ago

As for the integration update, it seems to me that the author has abandoned this project. Even if he updates the integration, I will be able to add the code there again, although I asked the author to add it to the integration (read here). Yes, this is also a good option, but how, for example, do you display a list of identified persons? Say 2 to 5 faces come into the camera's field of view and I would like to get a picture on my phone with a list of the names of the identified persons. In addition, I translate names into my native language. So I agree with you that it is better to use code in an automation than to get into the code of the integration itself.

Below are the attributes of the identified persons and I would like to collect all the names in one line. Do you have any ideas on how to implement this?

For reference, the names attribute (Person 1, Person 2) comes from the code I edited in the deepstack_face integration, which outputs the names of identified persons in one line:

faces:
  - name: Person 1
    confidence: 59.352
    bounding_box:
      height: 0.066
      width: 0.029
      y_min: 0.414
      x_min: 0.617
      y_max: 0.48
      x_max: 0.646
      names: Person 1, Person 2
    prediction:
      confidence: 0.59351665
      userid: Person 1
      y_min: 447
      x_min: 1185
      y_max: 518
      x_max: 1240
    entity_id: image_processing.detect_face_XXXXXXX
  - name: Person 2
    confidence: 70.478
    bounding_box:
      height: 0.224
      width: 0.097
      y_min: 0.374
      x_min: 0.197
      y_max: 0.598
      x_max: 0.294
    prediction:
      confidence: 0.7047771
      userid: Person 2
      y_min: 404
      x_min: 378
      y_max: 646
      x_max: 564
    entity_id: image_processing.detect_face_XXXXXXX
total_faces: null
total_matched_faces: 0
matched_faces: {}
last_detection: 2021-08-23_02-51-37
friendly_name: detect_face_XXXXXXXXXX
device_class: face
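One way to collect all the names into a single line, applying a confidence threshold per face, is sketched below in plain Python rather than a Jinja template. The 67.0 default threshold is an assumption; the input is shaped like the `faces` attribute above.

```python
def names_line(faces, min_confidence=67.0):
    """Join the names of all detected faces into one comma-separated line,
    replacing any face below the confidence threshold with 'unknown'.

    `faces` is a list shaped like the integration's `faces` attribute:
    [{"name": ..., "confidence": ...}, ...].
    """
    return ", ".join(
        face["name"] if face.get("confidence", 0.0) >= min_confidence else "unknown"
        for face in faces
    )
```

With the two faces above, Person 1 (59.352) would be reported as unknown while Person 2 (70.478) keeps its name.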

DevTodd commented 3 years ago

It will loop over the names, as shown in this screenshot: Screenshot_20210822-172431

DivanX10 commented 3 years ago

Thank you, you gave me an idea and I made several options. I created a sensor and an input number helper that lets you set the confidence directly in Lovelace, which is very convenient. I use the code below.

Display a list of the confidence levels for each person:

{{ state_attr('image_processing.detect_face_eufy_camera','faces') | map(attribute='confidence') | join(', ') }}

Display a list of names:

{{ state_attr('image_processing.detect_face_eufy_camera','faces') | map(attribute='name') | join(', ') }}


sensor:
  - platform: template
    sensors:
      persons_names:
        friendly_name: 'Names of identified persons'
        icon_template: mdi:face-recognition
        value_template: >
          {% set faces = state_attr('image_processing.detect_face_eufy_camera', 'faces') or [] %}
          {% set detect_face = faces | map(attribute='name') | join(', ') %}
          {% set confidence_face = faces | map(attribute='confidence') | map('float') | list %}
          {% set set_confidence = states('input_number.deepstack_confidence_face') | float(0) %}
          {% if detect_face and confidence_face and confidence_face | min >= set_confidence %}
          {{ detect_face }}
          {% else %}
          unknown
          {% endif %}

And if you want to send it to Telegram, here is that option:

alias: 'Process Data from Facial Recognition'
description: ''
trigger:
  - platform: state
    entity_id: image_processing.detect_face_eufy_camera
condition: []
action:
  - service: telegram_bot.send_photo
    data:
      file: /config/www/deepstack/snapshots/detect_face_eufy_camera_latest.jpg
      caption: >
        {% if is_state('image_processing.detect_face_eufy_camera', 'unknown') %}
        {% else %}
        {% set faces = state_attr('image_processing.detect_face_eufy_camera', 'faces') or [] %}
        {% set detect_face = faces | map(attribute='name') | join(', ') %}
        {% set confidence_face = faces | map(attribute='confidence') | map('float') | list %}
        {% set set_confidence = states('input_number.deepstack_confidence_face') | float(0) %}
        {% if detect_face and confidence_face and confidence_face | min >= set_confidence %}
        *Someone's in the hallway:* {{ detect_face }}
        {% else %}
        *Someone's in the hallway:* unknown
        {% endif %}
        {% endif %}
      target: 11111111
      disable_notification: false
mode: single
DevTodd commented 3 years ago

Glad it inspired you! Looks good although I only speak English and Spanish :P

DivanX10 commented 3 years ago

Can you tell me how to make the names display in Russian as a list? The problem is that it translates only one name; if several names are identified, it does not work. I need it to show Игорь, Олег, Маша instead of Igor, Oleg, Masha, changing depending on the identified persons.

This works in the Python file, but I can't make it work in Home Assistant:

        faces.append(
            {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
        )
        if name in ['Divan', 'divan'] and confidence > 70:
            name = 'Диван'
        elif name in ['Oleg', 'oleg'] and confidence > 70:
            name = 'Олег'
        elif name in ['Michael', 'michael'] and confidence > 70:
            name = 'Майкл'
        elif name in ['Toni', 'toni'] and confidence > 70:
            name = 'Тони'
        elif name in ['Julianna', 'julianna'] and confidence > 70:
            name = 'Джулианна'
        else:
            name = 'unknown'
        names_list.append(name)
    faces[0]['bounding_box']['names'] = ', '.join(names_list)
    return faces
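A self-contained sketch of the same idea, using a lookup table instead of an if/elif chain. The names, spellings, and the 70 threshold are assumptions taken from the snippets in this thread; extend the table with your own people.

```python
# Hypothetical translation table: lowercase recognized name -> display name.
TRANSLATIONS = {
    "igor": "Игорь",
    "oleg": "Олег",
    "masha": "Маша",
    "marina": "Марина",
}

def translate_names(faces, min_confidence=70.0):
    """Return one comma-separated line of translated names; any face that
    is below the threshold or not in the table becomes 'unknown'.

    `faces` is shaped like the integration's attribute:
    [{"name": ..., "confidence": ...}, ...].
    """
    names = []
    for face in faces:
        key = face.get("name", "").lower()
        if face.get("confidence", 0.0) > min_confidence and key in TRANSLATIONS:
            names.append(TRANSLATIONS[key])
        else:
            names.append("unknown")
    return ", ".join(names)
```

Because the mapping is a dict, adding a person is one line instead of a new elif branch.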

This only works if one name is recognized; with several names it does not work. We recognize faces:

{% set names = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','name')| map(attribute='name') | list | join(', ') %}
{% set total_faces = state_attr('image_processing.detect_face_eufy_camera','total_faces') %}
{% set confidence_face = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','confidence')| map(attribute='confidence') | join(', ') %}
{% if names in ["Igor" , "igor"]  and confidence_face > '60' %} 
  {% set names_list = "Игорь" %}
{% elif names in ['Oleg', 'oleg'] and confidence_face > '60' %} 
  {% set names_list = "Олег" %}
{% elif names in ['Masha','masha'] and confidence_face > '60' %} 
  {% set names_list = "Маша" %}
{% elif names in ['Marina','marina'] and confidence_face > '60' %} 
  {% set names_list = "Марина" %}
{% elif names in ['unknown'] %} 
  Неизвестное лицо
{% endif %}
{{ names_list }}
DevTodd commented 3 years ago

You should be able to do a regex replacement in an HA automation.

name: "{{ trigger.event.data.args | regex_replace(find='[^\\w]', replace='') }}"
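For reference, the same replacement in plain Python, a sketch of what that template filter does. Note that in Python 3, `\w` also matches Cyrillic letters, so translated names survive the cleanup.

```python
import re

def strip_non_word(text):
    """Equivalent of HA's regex_replace(find='[^\\w]', replace=''):
    remove every character that is not a word character."""
    return re.sub(r"[^\w]", "", text)
```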

DivanX10 commented 3 years ago

Thank you for the hint. By the way, a solution turned up, and it was right on the surface; I was really surprised, because in the deepstack client you can only specify names in English, not in Russian. You can't do this in the robmarkcole/deepstack-ui client, but you can in the techblog/deepstack-trainer client. It made my life a lot easier 😃

ocsele commented 3 years ago

wow, what traction my question got, super fantastic!! Thank you both for delivering various options to try, I can't wait to try to fix my integration, and enjoy the fantastic world of AI enforced automations 😃

DivanX10 commented 3 years ago

> wow, what traction my question got, super fantastic!! Thank you both for delivering various options to try, I can't wait to try to fix my integration, and enjoy the fantastic world of AI enforced automations 😃

I have published a post with a detailed description so that people can implement this for themselves.

ocsele commented 3 years ago

you're the best, thanks in the name of all noobs 😄