I appreciate your HASS component and the way it integrates in HA; that said, I'm not using it because I'm taking a different approach to exploit the power of Deepstack. I don't want to continuously process snapshots from my (5) cameras on the NUC, as it would use too much CPU. Yes, I could process snapshots less frequently, for example every 2-5 seconds, but then I could miss important action.

My approach is to exploit the simple (read: prone to false positives) on-camera motion detection to trigger the recording of short videos. These are uploaded to the NUC via the camera's NFS support (FTP should work the same way). I wrote a hackish Python script that processes a fixed-length (10 s) preamble of the video looking for faces and/or people: if one is detected, the video plus a snapshot with the subjects boxed is reported by HA to my mobile phone. I process 2 frames per second (not the full 25 fps of the video) looking for subjects with a minimum confidence of 0.80; if subjects are found in more than one frame, the script reports the frame with the highest confidence. Processing is cut short if a very-high-confidence (0.96) subject is found early. In this way the false positives reported by the camera are filtered by Deepstack, and only true positives are reported, together with a representative frame. I don't think your add-ons can currently be used for this purpose. Note that I'm using both person (object) detection and face detection to get more sensible behavior.
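For reference, here is a minimal sketch of the kind of processing my script does, assuming Deepstack is reachable on `localhost:5000` and that OpenCV (`cv2`) and `requests` are available. The thresholds match what I described above, but the `best_subject_frame` helper name and the exact endpoint handling are illustrative, not the exact code I run.

```python
# Sketch: sample the first 10 s of a clip at 2 fps, send each sampled frame to
# Deepstack's person and face detection endpoints, and keep the frame with the
# highest-confidence subject. Stop early on a very confident hit.
import cv2
import requests

DEEPSTACK = "http://localhost:5000"                       # assumed Deepstack host/port
ENDPOINTS = ["/v1/vision/detection", "/v1/vision/face"]   # person + face detection
MIN_CONF = 0.80               # minimum confidence to accept a subject
EARLY_EXIT_CONF = 0.96        # stop as soon as a very confident subject appears
PREAMBLE_SECONDS = 10         # only the first 10 s of the clip
SAMPLES_PER_SECOND = 2        # 2 frames per second, not the full 25 fps


def best_subject_frame(video_path):
    """Return (confidence, frame) of the best person/face found, or None."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = max(int(fps // SAMPLES_PER_SECOND), 1)
    max_frames = int(fps * PREAMBLE_SECONDS)

    best = None  # (confidence, frame)
    for idx in range(0, max_frames, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        for endpoint in ENDPOINTS:
            resp = requests.post(DEEPSTACK + endpoint,
                                 files={"image": jpg.tobytes()}, timeout=10).json()
            for pred in resp.get("predictions", []):
                # object detection: keep only "person"; face detection has no label
                if pred.get("label", "person") != "person":
                    continue
                conf = pred["confidence"]
                if conf < MIN_CONF:
                    continue
                if best is None or conf > best[0]:
                    best = (conf, frame)
                if conf >= EARLY_EXIT_CONF:
                    cap.release()
                    return best
    cap.release()
    return best
```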
It would be nice if you could include a similar service in your add-ons: one that processes an uploaded video and reports a representative frame. It could be combined with HA in a similar way: the camera uploads via FTP, a folder watch triggers the new service and, on a positive detection, HA notifies the user with the frame/video.
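The folder-watch trigger could look something like the sketch below, assuming the `watchdog` package and an upload folder such as `/share/camera_uploads` (both are my assumptions, not part of your add-ons). `process_clip` and `notify_ha` are placeholders for the detection step above and for whatever notification path HA would use (webhook, MQTT, ...).

```python
# Sketch: watch the upload folder for new clips and run the detection step on each.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

UPLOAD_DIR = "/share/camera_uploads"   # assumed NFS/FTP upload folder


def process_clip(path):
    """Placeholder: run the Deepstack check sketched above, return the best frame or None."""
    ...


def notify_ha(path, result):
    """Placeholder: push the representative frame + clip to Home Assistant."""
    ...


class NewClipHandler(FileSystemEventHandler):
    def on_created(self, event):
        # react only to newly uploaded video files
        if event.is_directory or not event.src_path.endswith(".mp4"):
            return
        result = process_clip(event.src_path)
        if result is not None:
            notify_ha(event.src_path, result)


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(NewClipHandler(), UPLOAD_DIR, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```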