You can either do:
target: Person, Car, etc
which results in the entity:
image_processing.rekognition_person_car_cam_1
However, it appears to me that this only gives numeric_state > 0
if both a person and a car are detected together.
or you could create:
Two (or more) separate config entries, each containing only one target, which results in two separate image_processing
entities if you want the option to call each image_scan individually:
image_processing.rekognition_person_cam_1
image_processing.rekognition_car_cam_1
I went with scanning for a person first and then for a car, but added a condition that image_processing.rekognition_person_cam_1
is below: 1 (no person detected) before scanning for a car, to reduce the number of scans.
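For reference, here is a rough sketch of that second approach: two single-target entries plus an automation that only scans for a car when no person was found. This is not the poster's exact config; the keys follow the integration's usual single-target schema, and the camera entity, secrets, and motion trigger are placeholders.

```yaml
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_key
    aws_secret_access_key: !secret aws_secret
    region_name: eu-west-1
    target: Person
    source:
      - entity_id: camera.cam_1
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_key
    aws_secret_access_key: !secret aws_secret
    region_name: eu-west-1
    target: Car
    source:
      - entity_id: camera.cam_1

automation:
  - alias: "Scan for car only if no person was detected"
    trigger:
      - platform: state
        entity_id: binary_sensor.cam_1_motion  # placeholder trigger
        to: "on"
    condition:
      - condition: numeric_state
        entity_id: image_processing.rekognition_person_cam_1
        below: 1  # no person detected
    action:
      - service: image_processing.scan
        entity_id: image_processing.rekognition_car_cam_1
```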
I would love this too, but I would like to avoid adding multiple configs: one for car, one for person, and so on.
This works, but it requires editing the code. I changed "target" into a lowercase list:
target = [x.lower().strip() for x in targets.split(',')]
and changed get_label_instances to count the total number of instances across all targets:
def get_label_instances(response, target):
    """Get the number of instances of a target label."""
    count = 0
    for label in response["Labels"]:
        if label["Name"].lower() in target:  # lowercase both sides to avoid comparison issues
            count += len(label["Instances"])
    return count
So if target is "Car, Person, Dog", it becomes ["car", "person", "dog"], and if 2 persons and 1 dog are detected in the image, the instance count would be 3.
Maybe @robmarkcole could add something like the above to the code so this becomes possible? It would also make the Rekognition service cheaper to use (I know, it's already cheap), as only one image would be sent, instead of sending the same image once for person, once for dog, and once for car.
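If a change like that were merged, a combined-target entry could look roughly like this. This is a sketch only, assuming the suggested comma-separated target parsing and that the resulting entity reports the total instance count as its state; the entity name and notify service are hypothetical.

```yaml
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_key
    aws_secret_access_key: !secret aws_secret
    region_name: eu-west-1
    confidence: 80
    target: Car, Person, Dog  # would be parsed into ["car", "person", "dog"]
    source:
      - entity_id: camera.cam_1

automation:
  - alias: "Notify when any target is detected"
    trigger:
      - platform: numeric_state
        entity_id: image_processing.rekognition_cam_1  # hypothetical entity name
        above: 0
    action:
      - service: notify.mobile_app_phone  # placeholder notify service
        data:
          message: "Rekognition detected at least one target on cam_1"
```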
@My-userid I am able to scan for both humans and cats (to know when to feed the local stray) from one API call. I'm doing it through Home Assistant and Node-RED. I downloaded Attributes from HACS and use it to create an additional sensor for cats from all of the other objects (and their confidence %) that Rekognition recognizes in an image. Then I created an automation to alert me when this attribute sensor is above 80%. It only uses one API call. In theory, you could create many sensors for different things with Attributes.
@roygbiv856 that sounds neat, I didn't know about Attributes from HACS. Would you care to share your Attributes config and your NR flow?
Sure, @ha14937. Make ya a deal: if you check out the issue I opened today, I'll share it with you. That's fair, right? First, install Attributes from HACS. Here's my config.yaml entry for a cat sensor using Attributes. Notice the entity listed is image_processing.rekognition_person_fy1. That's not a typo; it's pulling the attribute from my target-person config entry that I originally set up for Rekognition. Here's my primary Node-RED flow. 90% of it is taken from the flow posted by @robmarkcole, with a few modifications. Here's the secondary flow for recognizing cats.
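The linked config and flows are not reproduced here, but the idea looks roughly like the sketch below, assuming the Attributes platform's documented keys and that the rekognition entity exposes the Cat label's confidence as an attribute; the generated sensor name and notify service are guesses.

```yaml
sensor:
  - platform: attributes
    friendly_name: "Cat"
    attribute: Cat  # assumes detected labels are exposed as attributes named after the label
    unit_of_measurement: "%"
    entities:
      - image_processing.rekognition_person_fy1

automation:
  - alias: "Cat spotted"
    trigger:
      - platform: numeric_state
        entity_id: sensor.rekognition_person_fy1_cat  # actual name depends on how the plugin names the sensor
        above: 80
    action:
      - service: notify.mobile_app_phone  # placeholder notify service
        data:
          message: "Rekognition is over 80% sure there is a cat on camera"
```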
Hi gents, you could cross post that info on the forums, definitely of wider interest. Cheers
@roygbiv856: Thanks for sharing. Your first flow is almost the same as the one I use. I didn't know about "Attributes" from HACS, so I think that would solve my problem without editing the code.
As an alternative, I tried looking for an AWS Rekognition node so I could do it directly in Node-RED, but I haven't found anything I could use.
I opted to change the component code, which simplified my setup a lot. Now I only have one Amazon registration per camera, covering both car and person as targets (or more if wanted).
I get bounding boxes only for those targets that satisfy the confidence level.
I also changed the naming so the target is not part of the entity name, leaving only image_processing.rekognition.
The state is the total number of detected objects that are >= the confidence level. Works great :)
Oops, it wasn't @robmarkcole's flow, but another user's.
So, did you end up using Attributes, or did you alter the actual code of the rekognition integration?
I did end up altering the code of the rekognition integration.
Instead of Attributes, you can also use template sensors. That saves you the custom plugin.
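For example, a minimal template sensor sketch, assuming (as above) that the rekognition entity exposes the Cat label's confidence as an attribute; the entity and attribute names are illustrative.

```yaml
sensor:
  - platform: template
    sensors:
      rekognition_cat_confidence:
        friendly_name: "Cat confidence"
        unit_of_measurement: "%"
        # Falls back to 0 when the attribute is absent (no cat detected)
        value_template: >-
          {{ state_attr('image_processing.rekognition_person_cam_1', 'Cat') or 0 }}
```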
Multiple targets are implemented in deepstack, if anyone fancies making a PR.
Done
Something like this?
target: Person, Car, Dog